public inbox for bitcoindev@googlegroups.com
* [bitcoin-dev] Proposal: Package Mempool Accept and Package RBF
@ 2021-09-16  7:51 Gloria Zhao
  2021-09-19 23:16 ` Antoine Riard
  2021-09-20  9:19 ` Bastien TEINTURIER
  0 siblings, 2 replies; 16+ messages in thread
From: Gloria Zhao @ 2021-09-16  7:51 UTC (permalink / raw)
  To: Bitcoin Protocol Discussion

[-- Attachment #1: Type: text/plain, Size: 19618 bytes --]

Hi there,

I'm writing to propose a set of mempool policy changes to enable package
validation (in preparation for package relay) in Bitcoin Core. These would not
be consensus or P2P protocol changes. However, since mempool policy
significantly affects transaction propagation, I believe this is relevant for
the mailing list.

My proposal enables packages consisting of multiple parents and 1 child. If you
develop software that relies on specific transaction relay assumptions and/or
are interested in using package relay in the future, I'm very interested to
hear your feedback on the utility or restrictiveness of these package policies
for your use cases.

A draft implementation of this proposal can be found in [Bitcoin Core
PR#22290][1].

An illustrated version of this post can be found at
https://gist.github.com/glozow/dc4e9d5c5b14ade7cdfac40f43adb18a.
I have also linked the images below.

## Background

Feel free to skip this section if you are already familiar with mempool policy
and package relay terminology.

### Terminology Clarifications

* Package = an ordered list of related transactions, representable by a
  Directed Acyclic Graph.
* Package Feerate = the total modified fees divided by the total virtual size
  of all transactions in the package.
    - Modified fees = a transaction's base fees + fee delta applied by the user
      with `prioritisetransaction`. As such, we expect this to vary across
      mempools.
    - Virtual Size = the maximum of virtual sizes calculated using [BIP141
      virtual size][2] and sigop weight. [Implemented here in Bitcoin Core][3].
    - Note that feerate is not necessarily based on the base fees and
      serialized size.

* Fee-Bumping = user/wallet actions that take advantage of miner incentives to
  boost a transaction's candidacy for inclusion in a block, including Child
  Pays for Parent (CPFP) and [BIP125][12] Replace-by-Fee (RBF). Our intention
  in mempool policy is to recognize when the new transaction is more economical
  to mine than the original one(s) but not open DoS vectors, so there are some
  limitations.

### Policy

The purpose of the mempool is to store the best (i.e. most incentive-compatible
with miners: highest-feerate) candidates for inclusion in a block. Miners use
the mempool to build block templates. The mempool is also useful as a cache for
boosting block relay and validation performance, aiding transaction relay, and
generating feerate estimations.

Ideally, all consensus-valid transactions paying reasonable fees should make it
to miners through normal transaction relay, without any special connectivity or
relationships with miners. On the other hand, nodes do not have unlimited
resources, and a P2P network designed to let any honest node broadcast their
transactions also exposes the transaction validation engine to DoS attacks from
malicious peers.

As such, for unconfirmed transactions we are considering for our mempool, we
apply a set of validation rules in addition to consensus, primarily to protect
us from resource exhaustion and aid our efforts to keep the highest fee
transactions. We call this mempool _policy_: a set of (configurable,
node-specific) rules that transactions must abide by in order to be accepted
into our mempool. Transaction "Standardness" rules and mempool restrictions
such as "too-long-mempool-chain" are both examples of policy.

### Package Relay and Package Mempool Accept

In transaction relay, we currently consider transactions one at a time for
submission to the mempool. This creates a limitation in the node's ability to
determine which transactions have the highest feerates, since we cannot take
into account descendants (i.e. cannot use CPFP) until all the transactions are
in the mempool. Similarly, we cannot use a transaction's descendants when
considering it for RBF. When an individual transaction does not meet the
mempool minimum feerate and the user isn't able to create a replacement
transaction directly, it will not be accepted by mempools.

This limitation presents a security issue for applications and users relying on
time-sensitive transactions. For example, Lightning and other protocols create
UTXOs with multiple spending paths, where one counterparty's spending path opens
up after a timelock, and users are protected from cheating scenarios as long as
they redeem on-chain in time. A key security assumption is that all parties'
transactions will propagate and confirm in a timely manner. This assumption can
be broken if fee-bumping does not work as intended.

The end goal for Package Relay is to consider multiple transactions at the same
time, e.g. a transaction with its high-fee child. This may help us better
determine whether transactions should be accepted to our mempool, especially if
they don't meet fee requirements individually or are better RBF candidates as a
package. A combination of changes to mempool validation logic, policy, and
transaction relay allows us to better propagate the transactions with the
highest package feerates to miners, and makes fee-bumping tools more powerful
for users.

The "relay" part of Package Relay suggests P2P messaging changes, but a
large
part of the changes are in the mempool's package validation logic. We call
this
*Package Mempool Accept*.

### Previous Work

* Given that mempool validation is DoS-sensitive and complex, it would be
  dangerous to haphazardly tack on package validation logic. Many efforts have
  been made to make mempool validation less opaque (see [#16400][4],
  [#21062][5], [#22675][6], [#22796][7]).
* [#20833][8] Added basic capabilities for package validation, test accepts
  only (no submission to mempool).
* [#21800][9] Implemented package ancestor/descendant limit checks for
  arbitrary packages. Still test accepts only.
* Previous package relay proposals (see [#16401][10], [#19621][11]).

### Existing Package Rules

These are in master as introduced in [#20833][8] and [#21800][9]. I'll consider
them as "given" in the rest of this document, though they can be changed, since
package validation is test-accept only right now.

1. A package cannot exceed `MAX_PACKAGE_COUNT=25` count and
`MAX_PACKAGE_SIZE=101KvB` total size [8]

   *Rationale*: This is already enforced as mempool ancestor/descendant limits.
Presumably, transactions in a package are all related, so exceeding this limit
would mean that the package can either be split up or it wouldn't pass this
mempool policy.

2. Packages must be topologically sorted: if any dependencies exist between
transactions, parents must appear somewhere before children (see the sketch
after this list). [8]

3. A package cannot have conflicting transactions, i.e. none of them can spend
the same inputs. This also means there cannot be duplicate transactions. [8]

4. When packages are evaluated against ancestor/descendant limits in a test
accept, the union of all of their descendants and ancestors is considered. This
is essentially a "worst case" heuristic where every transaction in the package
is treated as each other's ancestor and descendant. [8]
Packages for which ancestor/descendant limits are accurately captured by this
heuristic: [19]
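
To make rules 2 and 3 concrete, here is a minimal sketch (in Python, not
Bitcoin Core code) of the ordering and conflict checks, assuming hypothetical
transaction objects with a `txid` and a list of `(prev_txid, vout)` inputs:

```python
def check_package_topology_and_conflicts(package):
    """Sketch of existing package rules 2 and 3 (hypothetical tx objects)."""
    all_txids = {tx.txid for tx in package}
    if len(all_txids) != len(package):
        return False                 # duplicate transactions (rule 3)
    placed = set()
    spent_outpoints = set()
    for tx in package:
        for prevout in tx.inputs:    # prevout = (prev_txid, vout)
            if prevout in spent_outpoints:
                return False         # two package txns spend the same input (rule 3)
            spent_outpoints.add(prevout)
            prev_txid = prevout[0]
            if prev_txid in all_txids and prev_txid not in placed:
                return False         # a parent appears after its child (rule 2)
        placed.add(tx.txid)
    return True
```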

There are also limitations such as the fact that CPFP carve out is not applied
to package transactions. #20833 also disables RBF in package validation; this
proposal overrides that to allow packages to use RBF.

## Proposed Changes

The next step in the Package Mempool Accept project is to implement submission
to mempool, initially through RPC only. This allows us to test the submission
logic before exposing it on P2P.

### Summary

- Packages may contain already-in-mempool transactions.
- Packages are 2 generations, Multi-Parent-1-Child.
- Fee-related checks use the package feerate. This means that wallets can
  create a package that utilizes CPFP.
- Parents are allowed to RBF mempool transactions with a set of rules similar
  to BIP125. This enables a combination of CPFP and RBF, where a transaction's
  descendant fees pay for replacing mempool conflicts.

There is a draft implementation in [#22290][1]. It is WIP, but feedback is
always welcome.

### Details

#### Packages May Contain Already-in-Mempool Transactions

A package may contain transactions that are already in the mempool. We remove
("deduplicate") those transactions from the package for the purposes of package
mempool acceptance. If a package is empty after deduplication, we do nothing.
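
A rough sketch of the deduplication step, assuming hypothetical transaction
objects with a `txid` attribute and a set of our mempool's txids (the real
implementation also has to handle same-txid-different-witness cases):

```python
def deduplicate_package(package, mempool_txids):
    # Keep only the transactions we don't already have; an empty result means
    # there is nothing left to validate for this package.
    return [tx for tx in package if tx.txid not in mempool_txids]
```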

*Rationale*: Mempools vary across the network. It's possible for a parent to be
accepted to the mempool of a peer on its own due to differences in policy and
fee market fluctuations. We should not reject or penalize the entire package for
an individual transaction as that could be a censorship vector.

#### Packages Are Multi-Parent-1-Child

Only packages of a specific topology are permitted. Namely, a package is
exactly 1 child with all of its unconfirmed parents. After deduplication, the
package may be exactly the same, empty, 1 child, 1 child with just some of its
unconfirmed parents, etc. Note that it's possible for the parents to be indirect
descendants/ancestors of one another, or for parent and child to share a parent,
so we cannot make any other topology assumptions.
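
A minimal sketch of the shape check, using the same hypothetical transaction
objects as above. Verifying that *all* of the child's unconfirmed parents are
present additionally requires consulting the mempool and UTXO set, which is
omitted here:

```python
def is_child_with_package_parents(package):
    # The last entry is the child; every preceding entry must be spent
    # directly by that child. (A package that is just a single transaction
    # would be handled by normal single-transaction validation.)
    if len(package) < 2:
        return False
    child = package[-1]
    txids_spent_by_child = {prev_txid for prev_txid, _ in child.inputs}
    return all(parent.txid in txids_spent_by_child for parent in package[:-1])
```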

*Rationale*: This allows for fee-bumping by CPFP. Allowing multiple parents
makes it possible to fee-bump a batch of transactions. Restricting packages to
a defined topology is also easier to reason about and simplifies the validation
logic greatly. Multi-parent-1-child allows us to think of the package as one
big transaction, where:

- Inputs = all the inputs of parents + inputs of the child that come from
  confirmed UTXOs
- Outputs = all the outputs of the child + all outputs of the parents that
  aren't spent by other transactions in the package

Examples of packages that follow this rule (variations of example A show some
possibilities after deduplication): ![image][15]

#### Fee-Related Checks Use Package Feerate

Package Feerate = the total modified fees divided by the total virtual size of
all transactions in the package.

To meet the two feerate requirements of a mempool, i.e., the pre-configured
minimum relay feerate (`minRelayTxFee`) and the dynamic mempool minimum feerate,
the total package feerate is used instead of the individual feerate. The
individual transactions are allowed to be below feerate requirements if the
package meets the feerate requirements. For example, the parent(s) in the
package can have 0 fees but be paid for by the child.
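
A sketch of the aggregate check, with hypothetical `modified_fee` (satoshis)
and `vsize` (vB) attributes and feerates expressed in sat/vB:

```python
def package_meets_minimum_feerates(package, min_relay_feerate, mempool_min_feerate):
    # Individual transactions may be below the floors as long as the package
    # as a whole (after deduplication) clears both of them.
    total_fees = sum(tx.modified_fee for tx in package)
    total_vsize = sum(tx.vsize for tx in package)
    return total_fees / total_vsize >= max(min_relay_feerate, mempool_min_feerate)
```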

*Rationale*: This can be thought of as "CPFP within a package," solving the
issue of a parent not meeting minimum fees on its own. This allows L2
applications to adjust their fees at broadcast time instead of overshooting or
risking getting stuck/pinned.

We use the package feerate of the package *after deduplication*.

*Rationale*: It would be incorrect to use the fees of transactions that are
already in the mempool, as we do not want a transaction's fees to be
double-counted for both its individual RBF and package RBF.

Examples F and G [14] show the same package, but P1 is submitted individually
before the package in example G. In example F, we can see that the 300vB
package pays an additional 200sat in fees, which is not enough to pay for its
own bandwidth (BIP125#4). In example G, we can see that P1 pays enough to
replace M1, but using P1's fees again during package submission would make it
look like a 300sat increase for a 200vB package. Even including its fees and
size would not be sufficient in this example, since the 300sat looks like
enough for the 300vB package. The calculation after deduplication is a 100sat
increase for a package of size 200vB, which correctly fails BIP125#4. Assume
all transactions have a size of 100vB.
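
To make the arithmetic concrete, the following sketch reproduces examples F and
G, assuming an incremental relay feerate of 1 sat/vB (the Bitcoin Core default):

```python
def pays_for_own_bandwidth(fee_delta_sat, replacement_vsize_vb, incremental_feerate=1.0):
    # BIP125#4: the additional fees must cover the replacement's own bandwidth.
    return fee_delta_sat >= incremental_feerate * replacement_vsize_vb

# Example F: the 300vB package pays only 200sat more than what it replaces.
assert not pays_for_own_bandwidth(200, 300)
# Example G, double-counting P1's fees but not its size: 300sat for 200vB "passes".
assert pays_for_own_bandwidth(300, 200)
# Counting P1's fees and size: 300sat still looks like enough for 300vB.
assert pays_for_own_bandwidth(300, 300)
# After deduplication: 100sat of new fees for 200vB of new transactions, which
# correctly fails BIP125#4.
assert not pays_for_own_bandwidth(100, 200)
```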

#### Package RBF

If a package meets feerate requirements as a package, the parents in the
package are allowed to replace-by-fee mempool transactions. The child cannot
replace mempool transactions. Multiple transactions can replace the same
transaction, but in order to be valid, none of the transactions can try to
replace an ancestor of another transaction in the same package (which would thus
make its inputs unavailable).

*Rationale*: Even if we are using package feerate, a package will not propagate
as intended if RBF still requires each individual transaction to meet the
feerate requirements.

We use a set of rules slightly modified from BIP125 as follows:

##### Signaling (Rule #1)

All mempool transactions to be replaced must signal replaceability.

*Rationale*: Signaling logic should be the same for package RBF and single
transaction acceptance. This would be updated if single transaction validation
moves to full RBF.
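
A sketch of the signaling check, assuming hypothetical transaction objects
exposing their input nSequence values (explicit BIP125 signaling only;
inherited signaling is not considered here):

```python
def signals_rbf(tx):
    # BIP125: a transaction signals replaceability if any of its inputs has an
    # nSequence below 0xfffffffe.
    return any(seq < 0xfffffffe for seq in tx.input_sequences)

def rule1_ok(mempool_txs_to_replace):
    # Rule #1 above: every mempool transaction to be replaced must signal.
    return all(signals_rbf(tx) for tx in mempool_txs_to_replace)
```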

##### New Unconfirmed Inputs (Rule #2)

A package may include new unconfirmed inputs, but the ancestor feerate of the
child must be at least as high as the ancestor feerates of every transaction
being replaced. This is contrary to BIP125#2, which states "The replacement
transaction may only include an unconfirmed input if that input was included in
one of the original transactions. (An unconfirmed input spends an output from a
currently-unconfirmed transaction.)"

*Rationale*: The purpose of BIP125#2 is to ensure that the replacement
transaction has a higher ancestor score than the original transaction(s) (see
[comment][13]). Example H [16] shows how adding a new unconfirmed input can
lower the ancestor score of the replacement transaction. P1 is trying to
replace M1, and spends an unconfirmed output of M2. P1 pays 800sat, M1 pays
600sat, and M2 pays 100sat. Assume all transactions have a size of 100vB.
While, in isolation, P1 looks like a better mining candidate than M1, it must
be mined with M2, so its ancestor feerate is actually 4.5sat/vB. This is lower
than M1's ancestor feerate, which is 6sat/vB.
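
The numbers in example H work out as follows (a quick check, not implementation
code):

```python
def ancestor_feerate(fees_sat, vsizes_vb):
    return sum(fees_sat) / sum(vsizes_vb)

# P1 (800sat) must be mined with its unconfirmed ancestor M2 (100sat).
p1_ancestor_feerate = ancestor_feerate([800, 100], [100, 100])  # 4.5 sat/vB
m1_ancestor_feerate = ancestor_feerate([600], [100])            # 6.0 sat/vB
assert p1_ancestor_feerate < m1_ancestor_feerate  # P1 is the worse candidate
```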

In package RBF, the rule analogous to BIP125#2 would be "none of the
transactions in the package can spend new unconfirmed inputs." Example J [17]
shows why, if any of the package transactions have ancestors, package feerate
is no longer accurate. Even though M2 and M3 are not ancestors of P1 (which is
the replacement transaction in an RBF), we're actually interested in the entire
package. A miner should mine M1, which is 5sat/vB, instead of M2, M3, P1, P2,
and P3, which is only 4sat/vB. The Package RBF rule cannot be loosened to only
allow the child to have new unconfirmed inputs, either, because it can still
cause us to overestimate the package's ancestor score.

However, enforcing a rule analogous to BIP125#2 would not only make Package RBF
less useful, but would also break Package RBF for packages with parents already
in the mempool: if a package parent has already been submitted, it would look
like the child is spending a "new" unconfirmed input. In example K [18], we're
looking to replace M1 with the entire package including P1, P2, and P3. We must
consider the case where one of the parents is already in the mempool (in this
case, P2), which means we must allow P3 to have new unconfirmed inputs. However,
M2 lowers the ancestor score of P3 to 4.3sat/vB, so we should not replace M1
with this package.

Thus, the package RBF rule regarding new unconfirmed inputs is less strict than
BIP125#2. However, we still achieve the same goal of requiring the replacement
transactions to have an ancestor score at least as high as the original ones.
As a result, the entire package is required to be a higher feerate mining
candidate than each of the replaced transactions.

Another note: the [comment][13] above the BIP125#2 code in the original RBF
implementation suggests that the rule was intended to be temporary.

##### Absolute Fee (Rule #3)

The package must increase the absolute fee of the mempool, i.e. the total fees
of the package must be higher than the absolute fees of the mempool
transactions it replaces. Combined with the CPFP rule above, this differs from
BIP125 Rule #3: an individual transaction in the package may have lower fees
than the transaction(s) it is replacing. In fact, it may have 0 fees, and the
child pays for RBF.

##### Feerate (Rule #4)

The package must pay for its own bandwidth; the package feerate must be higher
than the replaced transactions by at least the minimum relay feerate
(`incrementalRelayFee`). Combined with the CPFP rule above, this differs from
BIP125 Rule #4: an individual transaction in the package can have a lower
feerate than the transaction(s) it is replacing. In fact, it may have 0 fees,
and the child pays for RBF.
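
A sketch of rules 3 and 4 applied at the package level, with fees in satoshis,
sizes in vB, and the incremental relay feerate assumed to be 1 sat/vB:

```python
def package_rbf_fee_checks(package_fees, package_vsize, replaced_fees,
                           incremental_feerate=1.0):
    # Rule 3: the package's total fees must exceed the total fees of the
    # mempool transactions it replaces.
    higher_absolute_fee = package_fees > replaced_fees
    # Rule 4: the additional fees must pay for the package's own bandwidth.
    pays_for_bandwidth = (package_fees - replaced_fees) >= incremental_feerate * package_vsize
    return higher_absolute_fee and pays_for_bandwidth
```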

##### Total Number of Replaced Transactions (Rule #5)

The package cannot replace more than 100 mempool transactions. This is
identical to BIP125 Rule #5.

### Expected FAQs

1. Is it possible for only some of the package to make it into the mempool?

   Yes, it is. However, since we evict transactions from the mempool by
descendant score and the package child is supposed to be sponsoring the fees of
its parents, the most common scenario would be all-or-nothing. This is
incentive-compatible. In fact, to be conservative, package validation should
begin by trying to submit all of the transactions individually, and only use
the package mempool acceptance logic if the parents fail due to low feerate.

2. Should we allow packages to contain already-confirmed transactions?

    No, for practical reasons. In mempool validation, we actually aren't able
to tell with 100% confidence if we are looking at a transaction that has
already confirmed, because we look up inputs using a UTXO set. If we have
historical block data, it's possible to look for it, but this is inefficient,
not always possible for pruning nodes, and unnecessary because we're not going
to do anything with the transaction anyway. As such, we already have the
expectation that transaction relay is somewhat "stateful", i.e. nobody should
be relaying transactions that have already been confirmed. Similarly, we
shouldn't be relaying packages that contain already-confirmed transactions.

[1]: https://github.com/bitcoin/bitcoin/pull/22290
[2]:
https://github.com/bitcoin/bips/blob/1f0b563738199ca60d32b4ba779797fc97d040fe/bip-0141.mediawiki#transaction-size-calculations
[3]:
https://github.com/bitcoin/bitcoin/blob/94f83534e4b771944af7d9ed0f40746f392eb75e/src/policy/policy.cpp#L282
[4]: https://github.com/bitcoin/bitcoin/pull/16400
[5]: https://github.com/bitcoin/bitcoin/pull/21062
[6]: https://github.com/bitcoin/bitcoin/pull/22675
[7]: https://github.com/bitcoin/bitcoin/pull/22796
[8]: https://github.com/bitcoin/bitcoin/pull/20833
[9]: https://github.com/bitcoin/bitcoin/pull/21800
[10]: https://github.com/bitcoin/bitcoin/pull/16401
[11]: https://github.com/bitcoin/bitcoin/pull/19621
[12]: https://github.com/bitcoin/bips/blob/master/bip-0125.mediawiki
[13]:
https://github.com/bitcoin/bitcoin/pull/6871/files#diff-34d21af3c614ea3cee120df276c9c4ae95053830d7f1d3deaf009a4625409ad2R1101-R1104
[14]:
https://user-images.githubusercontent.com/25183001/133567078-075a971c-0619-4339-9168-b41fd2b90c28.png
[15]:
https://user-images.githubusercontent.com/25183001/132856734-fc17da75-f875-44bb-b954-cb7a1725cc0d.png
[16]:
https://user-images.githubusercontent.com/25183001/133567347-a3e2e4a8-ae9c-49f8-abb9-81e8e0aba224.png
[17]:
https://user-images.githubusercontent.com/25183001/133567370-21566d0e-36c8-4831-b1a8-706634540af3.png
[18]:
https://user-images.githubusercontent.com/25183001/133567444-bfff1142-439f-4547-800a-2ba2b0242bcb.png
[19]:
https://user-images.githubusercontent.com/25183001/133456219-0bb447cb-dcb4-4a31-b9c1-7d86205b68bc.png
[20]:
https://user-images.githubusercontent.com/25183001/132857787-7b7c6f56-af96-44c8-8d78-983719888c19.png



* Re: [bitcoin-dev] Proposal: Package Mempool Accept and Package RBF
  2021-09-16  7:51 [bitcoin-dev] Proposal: Package Mempool Accept and Package RBF Gloria Zhao
@ 2021-09-19 23:16 ` Antoine Riard
  2021-09-20 15:10   ` Gloria Zhao
  2021-09-20  9:19 ` Bastien TEINTURIER
  1 sibling, 1 reply; 16+ messages in thread
From: Antoine Riard @ 2021-09-19 23:16 UTC (permalink / raw)
  To: Gloria Zhao, Bitcoin Protocol Discussion

[-- Attachment #1: Type: text/plain, Size: 29136 bytes --]

Hi Gloria,

> A package may contain transactions that are already in the mempool. We
> remove
> ("deduplicate") those transactions from the package for the purposes of
> package
> mempool acceptance. If a package is empty after deduplication, we do
> nothing.

IIUC, you have a package A+B+C submitted for acceptance and A is already in
your mempool. You trim out A from the package and then evaluate B+C.

I think this might be an issue if A is the higher-fee element of the ABC
package. B+C package fees might be under the mempool min fee and will be
rejected, potentially breaking the acceptance expectations of the package
issuer?

Further, I think the dedup should be done on wtxid, as you might have multiple
valid witnesses, though with varying vsizes and as such offering different
feerates.

E.g. you're going to evaluate the package A+B and A' is already in your
mempool with a bigger valid witness. You trim A based on txid, then you
evaluate A'+B, which fails the fee checks. However, evaluating A+B would have
been a success.

AFAICT, the dedup rationale would be to save on CPU time/disk IO, to avoid
repeated signature verification and parent UTXO fetches? Can we achieve the
same goal by bypassing tx-level checks for already-in-mempool transactions
while conserving the package integrity for package-level checks?

> Note that it's possible for the parents to be
> indirect
> descendants/ancestors of one another, or for parent and child to share a
> parent,
> so we cannot make any other topology assumptions.

I'm not clearly understanding the accepted topologies. By "parent and child to
share a parent", do you mean the set of transactions A, B, C, where B is
spending A and C is spending A and B, would be correct?

If yes, is there a width limit introduced, or do we fall back on
MAX_PACKAGE_COUNT=25?

IIRC, one rationale for coming up with this topology limitation was to lower
the DoS risks when potentially deploying p2p packages.

Considering the current Core mempool acceptance rules, I think CPFP batching
is unsafe for LN time-sensitive closures. A successful malicious tx-relay
jamming of one channel commitment transaction would contaminate the remaining
commitments sharing the same package.

E.g., you broadcast the package A+B+C+D+E where A,B,C,D are commitment
transactions and E is a shared CPFP. If a malicious A' transaction has a better
feerate than A, the whole package acceptance will fail. Even if A' confirms in
the following block, the propagation and confirmation of B+C+D have been
delayed. This could result in a loss of funds.

That said, if you're broadcasting commitment transactions without
time-sensitive HTLC outputs, I think the batching is effectively a fee
saving as you don't have to duplicate the CPFP.

IMHO, I'm leaning towards deploying 1-parent/1-child during a first phase. I
think it's the most conservative step that still improves second-layer safety.

> *Rationale*:  It would be incorrect to use the fees of transactions that
are
> already in the mempool, as we do not want a transaction's fees to be
> double-counted for both its individual RBF and package RBF.

I'm unsure about the logical order of the checks proposed.

Say A+B is submitted to replace A', where A pays 0 sats, B pays 200 sats and A'
pays 100 sats. If we apply the individual RBF check on A, A+B acceptance fails.
For this reason I think the individual RBF should be bypassed and only the
package RBF should apply?

Note this situation is plausible: with the current LN design, your counterparty
can have a commitment transaction with a better fee just by selecting a higher
`dust_limit_satoshis` than yours.

> Examples F and G [14] show the same package, but P1 is submitted
> individually before
> the package in example G. In example F, we can see that the 300vB package
> pays
> an additional 200sat in fees, which is not enough to pay for its own
> bandwidth
> (BIP125#4). In example G, we can see that P1 pays enough to replace M1,
but
> using P1's fees again during package submission would make it look like a
> 300sat
> increase for a 200vB package. Even including its fees and size would not
be
> sufficient in this example, since the 300sat looks like enough for the
300vB
> package. The calculcation after deduplication is 100sat increase for a
> package
> of size 200vB, which correctly fails BIP125#4. Assume all transactions
have
> a
> size of 100vB.

What problem are you trying to solve with the "package feerate *after* dedup"
rule?

My understanding is that an in-package transaction might already be in the
mempool. Therefore, to compute a correct RBF penalty for the replacement, the
vsize of this transaction could be discarded, lowering the cost of package RBF.

If we keep a "safe" dedup mechanism (see my point above), I think this discount
is justified, as the validation cost of node operators is paid for?

> The child cannot replace mempool transactions.

Let's say you issue package A+B, then package C+B', where B' is a child of
both A and C. This rule fails the acceptance of C+B'?

I think this is a footgunish API: if a package issuer sends the
multiple-parent-one-child package A,B,C,D where D is the child of A,B,C, then
tries to broadcast the higher-feerate C'+D' package, it should be rejected. So
it's breaking the naive broadcaster assumption that a higher-feerate/higher-fee
package always replaces? And it might be unsafe in protocols where states are
symmetric. E.g. a malicious counterparty broadcasts S+A first, then you
honestly broadcast S+B, where B pays better fees.

> All mempool transactions to be replaced must signal replaceability.

I think this is unsafe for L2s if counterparties have malleability of the child
transaction. They can block your package replacement by opting out of RBF
signaling. IIRC, LN's "anchor output" presents such an ability.

I think it's better to either fix inherited signaling or move towards
full-rbf.

> if a package parent has already been submitted, it would
> look
>like the child is spending a "new" unconfirmed input.

I think this is an issue brought by the trimming during the dedup phase. If we
preserve the package integrity, only re-using the tx-level check results of
already-in-mempool transactions to save CPU time, we won't have this issue.
Package children can add unconfirmed inputs as long as they're in-package; the
BIP125 rule #2 is only evaluated against parents?

> However, we still achieve the same goal of requiring the
> replacement
> transactions to have a ancestor score at least as high as the original
> ones.

I'm not sure if this holds...

Let's say you have in-mempool A, B where A pays 10 sat/vb for 100 vbytes and B
pays 10 sat/vb for 100 vbytes. You have the candidate replacement D spending
both A and C, where D pays 15 sat/vb for 100 vbytes and C pays 1 sat/vb for
1000 vbytes.

Package A + B ancestor score is 10 sat/vb.

D has a higher feerate/absolute fee than B.

Package A + C + D ancestor score is ~3 sat/vb ((A's 1000 sats + C's 1000 sats +
D's 1500 sats) / (A's 100 vb + C's 1000 vb + D's 100 vb)).
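
A quick check of these numbers (fees in sats, sizes in vbytes):

```python
fees = {"A": 1000, "B": 1000, "C": 1000, "D": 1500}
vsizes = {"A": 100, "B": 100, "C": 1000, "D": 100}

original = ["A", "B"]
replacement = ["A", "C", "D"]

score = lambda txs: sum(fees[t] for t in txs) / sum(vsizes[t] for t in txs)
print(score(original))     # 10.0 sat/vb
print(score(replacement))  # ~2.9 sat/vb
```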

Overall, this is a review through the lens of LN requirements. I think other
L2 protocols/applications could be candidates for using package accept/relay,
such as:
* https://github.com/lightninglabs/pool
* https://github.com/discreetlogcontracts/dlcspecs
* https://github.com/bitcoin-teleport/teleport-transactions/
* https://github.com/sapio-lang/sapio
* https://github.com/commerceblock/mercury/blob/master/doc/statechains.md
* https://github.com/revault/practical-revault

Thanks for moving the ball forward on this subject.

Antoine


* Re: [bitcoin-dev] Proposal: Package Mempool Accept and Package RBF
  2021-09-16  7:51 [bitcoin-dev] Proposal: Package Mempool Accept and Package RBF Gloria Zhao
  2021-09-19 23:16 ` Antoine Riard
@ 2021-09-20  9:19 ` Bastien TEINTURIER
  2021-09-21 11:18   ` Gloria Zhao
  1 sibling, 1 reply; 16+ messages in thread
From: Bastien TEINTURIER @ 2021-09-20  9:19 UTC (permalink / raw)
  To: Gloria Zhao, Bitcoin Protocol Discussion

[-- Attachment #1: Type: text/plain, Size: 23153 bytes --]

Hi Gloria,

Thanks for this detailed post!

The illustrations you provided are very useful for this kind of graph topology
problem.

The rules you lay out for package RBF look good to me at first glance
as there are some subtle improvements compared to BIP 125.

> 1. A package cannot exceed `MAX_PACKAGE_COUNT=25` count and
> `MAX_PACKAGE_SIZE=101KvB` total size [8]

I have a question regarding this rule, as your example 2C could be
concerning for LN (unless I didn't understand it correctly).

This also touches on the package RBF rule 5 ("The package cannot
replace more than 100 mempool transactions.")

In your example we have a parent transaction A already in the mempool
and an unrelated child B. We submit a package C + D where C spends
another of A's outputs. You're highlighting that this package may be
rejected because of the unrelated transaction(s) B.

The way I see this, an attacker can abuse this rule to ensure
transaction A stays pinned in the mempool without confirming by
broadcasting a set of child transactions that reach these limits
and pay low fees (where A would be a commit tx in LN).

We had to create the CPFP carve-out rule explicitly to work around
this limitation, and I think it would be necessary for package RBF
as well, because in such cases we do want to be able to submit a
package A + C where C pays high fees to speed up A's confirmation,
regardless of unrelated unconfirmed children of A...

We could submit only C to benefit from the existing CPFP carve-out rule, but
that wouldn't work if our local mempool doesn't have A yet while other remote
mempools do.

Is my concern justified? Is this something that we should dig into a
bit deeper?

Thanks,
Bastien

> and
> fee market fluctuations. We should not reject or penalize the entire
> package for
> an individual transaction as that could be a censorship vector.
>
> #### Packages Are Multi-Parent-1-Child
>
> Only packages of a specific topology are permitted. Namely, a package is
> exactly
> 1 child with all of its unconfirmed parents. After deduplication, the
> package
> may be exactly the same, empty, 1 child, 1 child with just some of its
> unconfirmed parents, etc. Note that it's possible for the parents to be
> indirect
> descendants/ancestors of one another, or for parent and child to share a
> parent,
> so we cannot make any other topology assumptions.
>
> *Rationale*: This allows for fee-bumping by CPFP. Allowing multiple parents
> makes it possible to fee-bump a batch of transactions. Restricting
> packages to a
> defined topology is also easier to reason about and simplifies the
> validation
> logic greatly. Multi-parent-1-child allows us to think of the package as
> one big
> transaction, where:
>
> - Inputs = all the inputs of parents + inputs of the child that come from
>   confirmed UTXOs
> - Outputs = all the outputs of the child + all outputs of the parents that
>   aren't spent by other transactions in the package
>
> Examples of packages that follow this rule (variations of example A show
> some
> possibilities after deduplication): ![image][15]
>
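
To make the topology concrete, here is a minimal sketch of the shape check in Python; the `Tx` type and field names are illustrative rather than Bitcoin Core's actual data structures, and verifying that *all* of the child's unconfirmed parents are present additionally requires mempool/UTXO context:

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Tx:
    txid: str
    vin: List[Tuple[str, int]]  # (prevout txid, output index)

def is_child_with_parents(package: List[Tx]) -> bool:
    """Shape check only: the last transaction must spend an output of every
    other transaction in the package."""
    if len(package) < 2:
        return False
    child, parents = package[-1], package[:-1]
    spent = {prev_txid for prev_txid, _ in child.vin}
    return all(p.txid in spent for p in parents)
```
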
> #### Fee-Related Checks Use Package Feerate
>
> Package Feerate = the total modified fees divided by the total virtual
> size of
> all transactions in the package.
>
> To meet the two feerate requirements of a mempool, i.e., the pre-configured
> minimum relay feerate (`minRelayTxFee`) and dynamic mempool minimum
> feerate, the
> total package feerate is used instead of the individual feerate. The
> individual
> transactions are allowed to be below feerate requirements if the package
> meets
> the feerate requirements. For example, the parent(s) in the package can
> have 0
> fees but be paid for by the child.
>
> *Rationale*: This can be thought of as "CPFP within a package," solving the
> issue of a parent not meeting minimum fees on its own. This allows L2
> applications to adjust their fees at broadcast time instead of
> overshooting or
> risking getting stuck/pinned.
>
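
As a small illustration of the check described above (a sketch with made-up numbers; `modified_fee`, `vsize`, and the mempool minimum are illustrative):

```python
def package_feerate(txs):
    """Total modified fees divided by total virtual size, in sat/vB."""
    return sum(tx["modified_fee"] for tx in txs) / sum(tx["vsize"] for tx in txs)

# A zero-fee parent bumped by its child, against an assumed 3 sat/vB mempool minimum.
parent = {"modified_fee": 0, "vsize": 200}
child = {"modified_fee": 1500, "vsize": 100}
mempool_min_feerate = 3  # sat/vB, assumed

assert package_feerate([parent]) < mempool_min_feerate          # parent alone is rejected
assert package_feerate([parent, child]) >= mempool_min_feerate  # 1500/300 = 5 sat/vB passes
```
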
> We use the package feerate of the package *after deduplication*.
>
> *Rationale*:  It would be incorrect to use the fees of transactions that
> are
> already in the mempool, as we do not want a transaction's fees to be
> double-counted for both its individual RBF and package RBF.
>
> Examples F and G [14] show the same package, but P1 is submitted
> individually before
> the package in example G. In example F, we can see that the 300vB package
> pays
> an additional 200sat in fees, which is not enough to pay for its own
> bandwidth
> (BIP125#4). In example G, we can see that P1 pays enough to replace M1, but
> using P1's fees again during package submission would make it look like a
> 300sat
> increase for a 200vB package. Even including its fees and size would not be
> sufficient in this example, since the 300sat looks like enough for the
> 300vB
> package. The calculation after deduplication is 100sat increase for a
> package
> of size 200vB, which correctly fails BIP125#4. Assume all transactions
> have a
> size of 100vB.
>
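
The arithmetic in examples F and G can be summarized as follows (a sketch assuming Bitcoin Core's default incremental relay feerate of 1 sat/vB):

```python
INCREMENTAL_RELAY_FEERATE = 1  # sat/vB, assumed default

def pays_for_own_bandwidth(fee_increase_sat, package_vsize_vb):
    """BIP125#4-style check, with fees and vsize computed after deduplication."""
    return fee_increase_sat >= INCREMENTAL_RELAY_FEERATE * package_vsize_vb

# Example F: the 300 vB package only offers 200 sat more than what it replaces.
assert not pays_for_own_bandwidth(200, 300)

# Example G: after P1 is deduplicated, the remaining 200 vB package offers 100 sat more;
# double-counting P1's fees (300 sat over 200 or 300 vB) would wrongly make this look fine.
assert not pays_for_own_bandwidth(100, 200)
```
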
> #### Package RBF
>
> If a package meets feerate requirements as a package, the parents in the
> package are allowed to replace-by-fee mempool transactions. The child
> cannot
> replace mempool transactions. Multiple transactions can replace the same
> transaction, but in order to be valid, none of the transactions can try to
> replace an ancestor of another transaction in the same package (which
> would thus
> make its inputs unavailable).
>
> *Rationale*: Even if we are using package feerate, a package will not
> propagate
> as intended if RBF still requires each individual transaction to meet the
> feerate requirements.
>
> We use a set of rules slightly modified from BIP125 as follows:
>
> ##### Signaling (Rule #1)
>
> All mempool transactions to be replaced must signal replaceability.
>
> *Rationale*: Package RBF signaling logic should be the same for package
> RBF and
> single transaction acceptance. This would be updated if single transaction
> validation moves to full RBF.
>
> ##### New Unconfirmed Inputs (Rule #2)
>
> A package may include new unconfirmed inputs, but the ancestor feerate of
> the
> child must be at least as high as the ancestor feerates of every
> transaction
> being replaced. This is contrary to BIP125#2, which states "The replacement
> transaction may only include an unconfirmed input if that input was
> included in
> one of the original transactions. (An unconfirmed input spends an output
> from a
> currently-unconfirmed transaction.)"
>
> *Rationale*: The purpose of BIP125#2 is to ensure that the replacement
> transaction has a higher ancestor score than the original transaction(s)
> (see
> [comment][13]). Example H [16] shows how adding a new unconfirmed input
> can lower the
> ancestor score of the replacement transaction. P1 is trying to replace M1,
> and
> spends an unconfirmed output of M2. P1 pays 800sat, M1 pays 600sat, and M2
> pays
> 100sat. Assume all transactions have a size of 100vB. While, in isolation,
> P1
> looks like a better mining candidate than M1, it must be mined with M2, so
> its
> ancestor feerate is actually 4.5sat/vB.  This is lower than M1's ancestor
> feerate, which is 6sat/vB.
>
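
Plugging in the numbers from example H (a quick check, assuming each transaction is 100 vB as stated):

```python
p1_fee, m1_fee, m2_fee = 800, 600, 100  # sat
vsize = 100                             # vB per transaction

p1_ancestor_feerate = (p1_fee + m2_fee) / (2 * vsize)  # (800 + 100) / 200 = 4.5 sat/vB
m1_ancestor_feerate = m1_fee / vsize                   # 600 / 100       = 6.0 sat/vB

# P1 looks better than M1 on its own (8 vs 6 sat/vB), but it cannot be mined
# without M2, so as a mining candidate it is worse and the replacement fails.
assert p1_ancestor_feerate < m1_ancestor_feerate
```
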
> In package RBF, the rule analogous to BIP125#2 would be "none of the
> transactions in the package can spend new unconfirmed inputs." Example J
> [17] shows
> why, if any of the package transactions have ancestors, package feerate is
> no
> longer accurate. Even though M2 and M3 are not ancestors of P1 (which is
> the
> replacement transaction in an RBF), we're actually interested in the entire
> package. A miner should mine M1 which is 5sat/vB instead of M2, M3, P1,
> P2, and
> P3, which is only 4sat/vB. The Package RBF rule cannot be loosened to only
> allow
> the child to have new unconfirmed inputs, either, because it can still
> cause us
> to overestimate the package's ancestor score.
>
> However, enforcing a rule analogous to BIP125#2 would not only make
> Package RBF
> less useful, but would also break Package RBF for packages with parents
> already
> in the mempool: if a package parent has already been submitted, it would
> look
> like the child is spending a "new" unconfirmed input. In example K [18],
> we're
> looking to replace M1 with the entire package including P1, P2, and P3. We
> must
> consider the case where one of the parents is already in the mempool (in
> this
> case, P2), which means we must allow P3 to have new unconfirmed inputs.
> However,
> M2 lowers the ancestor score of P3 to 4.3sat/vB, so we should not replace
> M1
> with this package.
>
> Thus, the package RBF rule regarding new unconfirmed inputs is less strict
> than
> BIP125#2. However, we still achieve the same goal of requiring the
> replacement
> transactions to have an ancestor score at least as high as the original
> ones. As
> a result, the entire package is required to be a higher feerate mining
> candidate
> than each of the replaced transactions.
>
> Another note: the [comment][13] above the BIP125#2 code in the original RBF
> implementation suggests that the rule was intended to be temporary.
>
> ##### Absolute Fee (Rule #3)
>
> The package must increase the absolute fee of the mempool, i.e. the total
> fees
> of the package must be higher than the absolute fees of the mempool
> transactions
> it replaces. Combined with the CPFP rule above, this differs from BIP125
> Rule #3: an individual transaction in the package may have lower fees than
> the transaction(s) it is replacing. In fact, it may have 0 fees, and the
> child pays for RBF.
>
> ##### Feerate (Rule #4)
>
> The package must pay for its own bandwidth; the package feerate must be
> higher
> than the replaced transactions by at least minimum relay feerate
> (`incrementalRelayFee`). Combined with the CPFP rule above, this differs
> from
> BIP125 Rule #4 - an individual transaction in the package can have a lower
> feerate than the transaction(s) it is replacing. In fact, it may have 0
> fees,
> and the child pays for RBF.
>
> ##### Total Number of Replaced Transactions (Rule #5)
>
> The package cannot replace more than 100 mempool transactions. This is
> identical
> to BIP125 Rule #5.
>
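
A compact way to express rules #3 through #5 together (a sketch only; parameter names are illustrative and the incremental relay feerate is assumed to be Bitcoin Core's default of 1 sat/vB):

```python
MAX_REPLACEMENT_CANDIDATES = 100
INCREMENTAL_RELAY_FEERATE = 1  # sat/vB, assumed default

def package_rbf_fee_checks(package_fees, package_vsize, replaced_fees, num_replaced):
    # Rule #3: the package must pay strictly more in absolute fees than what it replaces.
    if package_fees <= replaced_fees:
        return False
    # Rule #4: the additional fees must pay for the package's own bandwidth.
    if package_fees - replaced_fees < INCREMENTAL_RELAY_FEERATE * package_vsize:
        return False
    # Rule #5: at most 100 mempool transactions may be replaced.
    return num_replaced <= MAX_REPLACEMENT_CANDIDATES
```
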
> ### Expected FAQs
>
> 1. Is it possible for only some of the package to make it into the mempool?
>
>    Yes, it is. However, since we evict transactions from the mempool by
> descendant score and the package child is supposed to be sponsoring the
> fees of
> its parents, the most common scenario would be all-or-nothing. This is
> incentive-compatible. In fact, to be conservative, package validation
> should
> begin by trying to submit all of the transactions individually, and only
> use the
> package mempool acceptance logic if the parents fail due to low feerate.
>
> 2. Should we allow packages to contain already-confirmed transactions?
>
>     No, for practical reasons. In mempool validation, we actually aren't
> able to
> tell with 100% confidence if we are looking at a transaction that has
> already
> confirmed, because we look up inputs using a UTXO set. If we have
> historical
> block data, it's possible to look for it, but this is inefficient, not
> always
> possible for pruning nodes, and unnecessary because we're not going to do
> anything with the transaction anyway. As such, we already have the
> expectation
> that transaction relay is somewhat "stateful" i.e. nobody should be
> relaying
> transactions that have already been confirmed. Similarly, we shouldn't be
> relaying packages that contain already-confirmed transactions.
>
> [1]: https://github.com/bitcoin/bitcoin/pull/22290
> [2]:
> https://github.com/bitcoin/bips/blob/1f0b563738199ca60d32b4ba779797fc97d040fe/bip-0141.mediawiki#transaction-size-calculations
> [3]:
> https://github.com/bitcoin/bitcoin/blob/94f83534e4b771944af7d9ed0f40746f392eb75e/src/policy/policy.cpp#L282
> [4]: https://github.com/bitcoin/bitcoin/pull/16400
> [5]: https://github.com/bitcoin/bitcoin/pull/21062
> [6]: https://github.com/bitcoin/bitcoin/pull/22675
> [7]: https://github.com/bitcoin/bitcoin/pull/22796
> [8]: https://github.com/bitcoin/bitcoin/pull/20833
> [9]: https://github.com/bitcoin/bitcoin/pull/21800
> [10]: https://github.com/bitcoin/bitcoin/pull/16401
> [11]: https://github.com/bitcoin/bitcoin/pull/19621
> [12]: https://github.com/bitcoin/bips/blob/master/bip-0125.mediawiki
> [13]:
> https://github.com/bitcoin/bitcoin/pull/6871/files#diff-34d21af3c614ea3cee120df276c9c4ae95053830d7f1d3deaf009a4625409ad2R1101-R1104
> [14]:
> https://user-images.githubusercontent.com/25183001/133567078-075a971c-0619-4339-9168-b41fd2b90c28.png
> [15]:
> https://user-images.githubusercontent.com/25183001/132856734-fc17da75-f875-44bb-b954-cb7a1725cc0d.png
> [16]:
> https://user-images.githubusercontent.com/25183001/133567347-a3e2e4a8-ae9c-49f8-abb9-81e8e0aba224.png
> [17]:
> https://user-images.githubusercontent.com/25183001/133567370-21566d0e-36c8-4831-b1a8-706634540af3.png
> [18]:
> https://user-images.githubusercontent.com/25183001/133567444-bfff1142-439f-4547-800a-2ba2b0242bcb.png
> [19]:
> https://user-images.githubusercontent.com/25183001/133456219-0bb447cb-dcb4-4a31-b9c1-7d86205b68bc.png
> [20]:
> https://user-images.githubusercontent.com/25183001/132857787-7b7c6f56-af96-44c8-8d78-983719888c19.png
> _______________________________________________
> bitcoin-dev mailing list
> bitcoin-dev@lists•linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>

[-- Attachment #2: Type: text/html, Size: 25964 bytes --]

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [bitcoin-dev] Proposal: Package Mempool Accept and Package RBF
  2021-09-19 23:16 ` Antoine Riard
@ 2021-09-20 15:10   ` Gloria Zhao
  2021-09-23  4:29     ` Antoine Riard
  0 siblings, 1 reply; 16+ messages in thread
From: Gloria Zhao @ 2021-09-20 15:10 UTC (permalink / raw)
  To: Antoine Riard; +Cc: Bitcoin Protocol Discussion

[-- Attachment #1: Type: text/plain, Size: 38910 bytes --]

Hi Antoine,

First of all, thank you for the thorough review. I appreciate your insight
on LN requirements.

> IIUC, you have a package A+B+C submitted for acceptance and A is already
in your mempool. You trim out A from the package and then evaluate B+C.

> I think this might be an issue if A is the higher-fee element of the ABC
package. B+C package fees might be under the mempool min fee and will be
rejected, potentially breaking the acceptance expectations of the package
issuer ?

Correct, if B+C is too low feerate to be accepted, we will reject it. I
prefer this because it is incentive compatible: A can be mined by itself,
so there's no reason to prefer A+B+C instead of A.
As another way of looking at this, consider the case where we do accept
A+B+C and it sits at the "bottom" of our mempool. If our mempool reaches
capacity, we evict the lowest descendant feerate transactions, which are
B+C in this case. This gives us the same resulting mempool, with A and not
B+C.
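
A toy illustration of this point (numbers are made up; A alone clears the mempool minimum, B+C does not):

```python
mempool_min_feerate = 5        # sat/vB, assumed
a_fee, a_vsize = 1000, 100     # A: 10 sat/vB, already in the mempool
bc_fee, bc_vsize = 600, 200    # B+C together: 3 sat/vB

# Rejecting B+C is incentive compatible: accepting A+B+C and later evicting by
# lowest descendant score would remove B+C first, leaving the same mempool (just A).
accept_bc = bc_fee / bc_vsize >= mempool_min_feerate
assert not accept_bc
```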


> Further, I think the dedup should be done on wtxid, as you might have
multiple valid witnesses. Though with varying vsizes and as such offering
different feerates.

I agree that variations of the same package with different witnesses are a
case that must be handled. I consider witness replacement to be a project
that can be done in parallel to package mempool acceptance because being
able to accept packages does not worsen the problem of a
same-txid-different-witness "pinning" attack.

If or when we have witness replacement, the logic is: if the individual
transaction is enough to replace the mempool one, the replacement will
happen during the preceding individual transaction acceptance, and
deduplication logic will work. Otherwise, we will try to deduplicate by
wtxid, see that we need a package witness replacement, and use the package
feerate to evaluate whether this is economically rational.

See the #22290 "handle package transactions already in mempool" commit (
https://github.com/bitcoin/bitcoin/pull/22290/commits/fea75a2237b46cf76145242fecad7e274bfcb5ff),
which handles the case of same-txid-different-witness by simply using the
transaction in the mempool for now, with TODOs for what I just described.
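
Roughly, the deduplication decision described here looks like the following sketch (hypothetical helper names, not the PR's actual interface):

```python
def deduplicate(package_tx, mempool):
    entry = mempool.get_by_txid(package_tx.txid)
    if entry is None:
        return "validate"            # not in the mempool: validate as part of the package
    if entry.wtxid == package_tx.wtxid:
        return "skip"                # identical transaction: deduplicate
    # Same txid, different witness: #22290 currently keeps the mempool version;
    # a future witness-replacement step would compare package feerates here instead.
    return "use_mempool_version"
```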


> I'm not clearly understanding the accepted topologies. By "parent and
child to share a parent", do you mean the set of transactions A, B, C,
where B is spending A and C is spending A and B would be correct ?

Yes, that is what I meant. Yes, that would be a valid package under these
rules.

> If yes, is there a width-limit introduced, or do we fall back on
MAX_PACKAGE_COUNT=25?

No, there is no limit on connectivity other than "child with all
unconfirmed parents." We will enforce MAX_PACKAGE_COUNT=25 and child's
in-mempool + in-package ancestor limits.


> Considering the current Core's mempool acceptance rules, I think CPFP
batching is unsafe for LN time-sensitive closure. A malicious tx-relay
jamming attack that succeeds on one channel commitment transaction would
contaminate the remaining commitments sharing the same package.

> E.g., you broadcast the package A+B+C+D+E where A,B,C,D are commitment
transactions and E is a shared CPFP. If a malicious A' transaction has a
better feerate than A, the whole package acceptance will fail. Even if A'
confirms in the following block,
the propagation and confirmation of B+C+D have been delayed. This could
lead to a loss of funds.

Please note that A may replace A' even if A' has higher fees than A
individually, because the proposed package RBF utilizes the fees and size
of the entire package. This just requires E to pay enough fees, although
this can be pretty high if there are also potential B' and C' competing
commitment transactions that we don't know about.


> IMHO, I'm leaning towards deploying 1-parent/1-child during a first phase.
I think it's the most conservative step that still improves
second-layer safety.

So far, my understanding is that multi-parent-1-child is desired for
batched fee-bumping (
https://github.com/bitcoin/bitcoin/pull/22674#issuecomment-897951289) and
I've also seen your response which I have less context on (
https://github.com/bitcoin/bitcoin/pull/22674#issuecomment-900352202). That
being said, I am happy to create a new proposal for 1 parent + 1 child
(which would be slightly simpler) and plan for moving to
multi-parent-1-child later if that is preferred. I am very interested in
hearing feedback on that approach.


> Say A+B is submitted to replace A', where A pays 0 sats, B pays 200 sats,
and A' pays 100 sats. If we apply the individual RBF on A, A+B acceptance
fails. For this reason I think the individual RBF should be bypassed and
only the package RBF should apply?

I think there is a misunderstanding here - let me describe what I'm
proposing we'd do in this situation: we'll try individual submission for A,
see that it fails due to "insufficient fees." Then, we'll try package
validation for A+B and use package RBF. If A+B pays enough, it can still
replace A'. If A fails for a bad signature, we won't look at B or A+B. Does
this meet your expectations?
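
In pseudocode, the order of operations described here is roughly (illustrative names and error strings, not the actual implementation):

```python
def submit_package(package, mempool):
    fee_failures = []
    for tx in package:
        ok, reason = mempool.try_accept_individual(tx)
        if ok:
            continue
        if reason != "insufficient fee":
            return f"abort: {tx.txid} failed ({reason})"   # e.g. a bad signature
        fee_failures.append(tx)
    if not fee_failures:
        return "all transactions accepted individually"
    # Only fee-related failures remain: fall back to package validation,
    # where package feerate and package RBF rules apply.
    return mempool.try_accept_package(package)
```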


> What problem are you trying to solve by the package feerate *after* dedup
rule ?
> My understanding is that an in-package transaction might be already in
the mempool. Therefore, to compute a correct RBF penalty replacement, the
vsize of this transaction could be discarded lowering the cost of package
RBF.

I'm proposing that, when a transaction has already been submitted to
mempool, we would ignore both its fees and vsize when calculating package
feerate. In example G2, we shouldn't count M1 fees after its submission to
mempool, since M1's fees have already been used to pay for its individual
bandwidth, and it shouldn't be used again to pay for P2 and P3's bandwidth.
We also shouldn't count its vsize, since it has already been paid for.


> I think this is a footgunish API: if a package issuer sends the
multiple-parent-one-child package A,B,C,D where D is the child of A,B,C,
then tries to broadcast the higher-feerate C'+D' package, it should be
rejected. So it's breaking the naive broadcaster assumption that a
higher-feerate/higher-fee package always replaces?

Note that, if C' conflicts with C, it also conflicts with D, since D is a
descendant of C and would thus need to be evicted along with it.
Implicitly, D' would not be in conflict with D.
More generally, this example is surprising to me because I didn't think
packages would be used to fee-bump replaceable transactions. Do we want the
child to be able to replace mempool transactions as well? This can be
implemented with a bit of additional logic.

> I think this is unsafe for L2s if counterparties have malleability of the
child transaction. They can block your package replacement by opting-out
from RBF signaling. IIRC, LN's "anchor output" presents such an ability.

I'm not sure what you mean? Let's say we have a package of parent A + child
B, where A is supposed to replace a mempool transaction A'. Are you saying
that counterparties are able to malleate the package child B, or a child of
A'? If they can malleate a child of A', that shouldn't matter as long as A'
is signaling replacement. This would be handled identically with full RBF
and what Core currently implements.

> I think this is an issue brought by the trimming during the dedup phase.
If we preserve the package integrity, only re-using the tx-level check
results of already in-mempool transactions to save CPU time, we won't
have this issue. Package children can add unconfirmed inputs as long as
they're in-package, and BIP125 rule #2 is only evaluated against parents?

Sorry, I don't understand what you mean by "preserve the package
integrity?" Could you elaborate?

> Let's say you have in-mempool A, B where A pays 10 sat/vb for 100 vbytes
and B pays 10 sat/vb for 100 vbytes. You have the candidate replacement D
spending both A and C where D pays 15sat/vb for 100 vbytes and C pays 1
sat/vb for 1000 vbytes.

> Package A + B ancestor score is 10 sat/vb.

> D has a higher feerate/absolute fee than B.

> Package A + C + D ancestor score is ~ 3 sat/vb ((A's 1000 sats + C's 1000
sats + D's 1500 sats) / A's 100 vb + C's 1000 vb + D's 100 vb)

I am in agreement with your calculations but unsure if we disagree on the
expected outcome. Yes, B has an ancestor score of 10sat/vb and D has an
ancestor score of ~2.9sat/vb. Since D's ancestor score is lower than B's,
it fails the proposed package RBF Rule #2, so this package would be
rejected. Does this meet your expectations?
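
For reference, the arithmetic with these numbers (a quick check):

```python
a_fee, a_vsize = 1000, 100     # A: 10 sat/vB, in the mempool
b_fee, b_vsize = 1000, 100     # B: 10 sat/vB, in the mempool (to be replaced)
c_fee, c_vsize = 1000, 1000    # C: 1 sat/vB, new parent
d_fee, d_vsize = 1500, 100     # D: 15 sat/vB, new child spending A and C

b_ancestor_score = b_fee / b_vsize                                           # 10 sat/vB
d_ancestor_score = (d_fee + a_fee + c_fee) / (d_vsize + a_vsize + c_vsize)   # ~2.9 sat/vB

# Proposed Rule #2: the child's ancestor feerate must be at least as high as that
# of every transaction it replaces, so this replacement is rejected.
assert d_ancestor_score < b_ancestor_score
```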

Thank you for linking to projects that might be interested in package relay
:)

Thanks,
Gloria

On Mon, Sep 20, 2021 at 12:16 AM Antoine Riard <antoine.riard@gmail•com>
wrote:

> Hi Gloria,
>
> > A package may contain transactions that are already in the mempool. We
> > remove
> > ("deduplicate") those transactions from the package for the purposes of
> > package
> > mempool acceptance. If a package is empty after deduplication, we do
> > nothing.
>
> IIUC, you have a package A+B+C submitted for acceptance and A is already
> in your mempool. You trim out A from the package and then evaluate B+C.
>
> I think this might be an issue if A is the higher-fee element of the ABC
> package. B+C package fees might be under the mempool min fee and will be
> rejected, potentially breaking the acceptance expectations of the package
> issuer ?
>
> Further, I think the dedup should be done on wtxid, as you might have
> multiple valid witnesses. Though with varying vsizes and as such offering
> different feerates.
>
> E.g., you're going to evaluate the package A+B and A' is already in your
> mempool with a bigger valid witness. You trim A based on txid, then you
> evaluate A'+B, which fails the fee checks. However, evaluating A+B would
> have been a success.
>
> AFAICT, the dedup rationale would be to save on CPU time and disk I/O, to avoid
> repeated signature verification and parent UTXO fetches? Can we achieve
> the same goal by bypassing tx-level checks for already-in-mempool transactions while
> preserving the package integrity for package-level checks?
>
> > Note that it's possible for the parents to be
> > indirect
> > descendants/ancestors of one another, or for parent and child to share a
> > parent,
> > so we cannot make any other topology assumptions.
>
> I'm not clearly understanding the accepted topologies. By "parent and
> child to share a parent", do you mean the set of transactions A, B, C,
> where B is spending A and C is spending A and B would be correct ?
>
> If yes, is there a width-limit introduced, or do we fall back on
> MAX_PACKAGE_COUNT=25?
>
> IIRC, one rationale to come with this topology limitation was to lower the
> DoS risks when potentially deploying p2p packages.
>
> Considering the current Core's mempool acceptance rules, I think CPFP
> batching is unsafe for LN time-sensitive closure. A malicious tx-relay
> jamming attack that succeeds on one channel commitment transaction would
> contaminate the remaining commitments sharing the same package.
>
> E.g., you broadcast the package A+B+C+D+E where A,B,C,D are commitment
> transactions and E is a shared CPFP. If a malicious A' transaction has a
> better feerate than A, the whole package acceptance will fail. Even if A'
> confirms in the following block,
> the propagation and confirmation of B+C+D have been delayed. This could
> lead to a loss of funds.
>
> That said, if you're broadcasting commitment transactions without
> time-sensitive HTLC outputs, I think the batching is effectively a fee
> saving as you don't have to duplicate the CPFP.
>
> IMHO, I'm leaning towards deploying 1-parent/1-child during a first phase.
> I think it's the most conservative step that still improves second-layer safety.
>
> > *Rationale*:  It would be incorrect to use the fees of transactions that
> are
> > already in the mempool, as we do not want a transaction's fees to be
> > double-counted for both its individual RBF and package RBF.
>
> I'm unsure about the logical order of the checks proposed.
>
> Say A+B is submitted to replace A', where A pays 0 sats, B pays 200 sats,
> and A' pays 100 sats. If we apply the individual RBF on A, A+B acceptance
> fails. For this reason I think the individual RBF should be bypassed and
> only the package RBF should apply?
>
> Note this situation is plausible: with the current LN design, your
> counterparty can have a commitment transaction with a better fee just by
> selecting a higher `dust_limit_satoshis` than yours.
>
> > Examples F and G [14] show the same package, but P1 is submitted
> > individually before
> > the package in example G. In example F, we can see that the 300vB package
> > pays
> > an additional 200sat in fees, which is not enough to pay for its own
> > bandwidth
> > (BIP125#4). In example G, we can see that P1 pays enough to replace M1,
> but
> > using P1's fees again during package submission would make it look like a
> > 300sat
> > increase for a 200vB package. Even including its fees and size would not
> be
> > sufficient in this example, since the 300sat looks like enough for the
> 300vB
> > package. The calculation after deduplication is 100sat increase for a
> > package
> > of size 200vB, which correctly fails BIP125#4. Assume all transactions
> have
> > a
> > size of 100vB.
>
> What problem are you trying to solve by the package feerate *after* dedup
> rule ?
>
> My understanding is that an in-package transaction might be already in the
> mempool. Therefore, to compute a correct RBF penalty replacement, the vsize
> of this transaction could be discarded lowering the cost of package RBF.
>
> If we keep a "safe" dedup mechanism (see my point above), I think this
> discount is justified, as the validation cost of node operators is paid for
> ?
>
> > The child cannot replace mempool transactions.
>
> Let's say you issue package A+B, then package C+B', where B' is a child of
> both A and C. This rule fails the acceptance of C+B' ?
>
> I think this is a footgunish API: if a package issuer sends the
> multiple-parent-one-child package A,B,C,D where D is the child of A,B,C,
> then tries to broadcast the higher-feerate C'+D' package, it should be
> rejected. So it's breaking the naive broadcaster assumption that a
> higher-feerate/higher-fee package always replaces? And it might be unsafe
> in protocols where states are symmetric. E.g., a malicious counterparty
> first broadcasts S+A, then you honestly broadcast S+B, where B pays better
> fees.
>
> > All mempool transactions to be replaced must signal replaceability.
>
> I think this is unsafe for L2s if counterparties have malleability of the
> child transaction. They can block your package replacement by opting-out
> from RBF signaling. IIRC, LN's "anchor output" presents such an ability.
>
> I think it's better to either fix inherited signaling or move towards
> full-rbf.
>
> > if a package parent has already been submitted, it would
> > look
> >like the child is spending a "new" unconfirmed input.
>
> I think this is an issue brought by the trimming during the dedup phase.
> If we preserve the package integrity, only re-using the tx-level check
> results of already in-mempool transactions to save CPU time, we won't
> have this issue. Package children can add unconfirmed inputs as long as
> they're in-package, and BIP125 rule #2 is only evaluated against parents?
>
> > However, we still achieve the same goal of requiring the
> > replacement
> > transactions to have an ancestor score at least as high as the original
> > ones.
>
> I'm not sure if this holds...
>
> Let's say you have in-mempool A, B where A pays 10 sat/vb for 100 vbytes
> and B pays 10 sat/vb for 100 vbytes. You have the candidate replacement D
> spending both A and C where D pays 15sat/vb for 100 vbytes and C pays 1
> sat/vb for 1000 vbytes.
>
> Package A + B ancestor score is 10 sat/vb.
>
> D has a higher feerate/absolute fee than B.
>
> Package A + C + D ancestor score is ~ 3 sat/vb ((A's 1000 sats + C's 1000
> sats + D's 1500 sats) /
> A's 100 vb + C's 1000 vb + D's 100 vb)
>
> Overall, this is a review through the lens of LN requirements. I think
> other L2 protocols/applications
> could be candidates for using package accept/relay, such as:
> * https://github.com/lightninglabs/pool
> * https://github.com/discreetlogcontracts/dlcspecs
> * https://github.com/bitcoin-teleport/teleport-transactions/
> * https://github.com/sapio-lang/sapio
> * https://github.com/commerceblock/mercury/blob/master/doc/statechains.md
> * https://github.com/revault/practical-revault
>
> Thanks for rolling forward the ball on this subject.
>
> Antoine
>

[-- Attachment #2: Type: text/html, Size: 42716 bytes --]

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [bitcoin-dev] Proposal: Package Mempool Accept and Package RBF
  2021-09-20  9:19 ` Bastien TEINTURIER
@ 2021-09-21 11:18   ` Gloria Zhao
  2021-09-21 15:18     ` Bastien TEINTURIER
  0 siblings, 1 reply; 16+ messages in thread
From: Gloria Zhao @ 2021-09-21 11:18 UTC (permalink / raw)
  To: Bastien TEINTURIER; +Cc: Bitcoin Protocol Discussion

[-- Attachment #1: Type: text/plain, Size: 25954 bytes --]

Hi Bastien,

Thank you for your feedback!

> In your example we have a parent transaction A already in the mempool
> and an unrelated child B. We submit a package C + D where C spends
> another of A's outputs. You're highlighting that this package may be
> rejected because of the unrelated transaction(s) B.

> The way I see this, an attacker can abuse this rule to ensure
> transaction A stays pinned in the mempool without confirming by
> broadcasting a set of child transactions that reach these limits
> and pay low fees (where A would be a commit tx in LN).

I believe you are describing a pinning attack in which your adversarial
counterparty attempts to monopolize the mempool descendant limit of the
shared transaction A in order to prevent you from submitting a fee-bumping
child C; I've tried to illustrate this as diagram A here:
https://user-images.githubusercontent.com/25183001/134159860-068080d0-bbb6-4356-ae74-00df00644c74.png
(please let me know if I'm misunderstanding).

I believe this attack is mitigated as long as we attempt to submit
transactions individually (and thus take advantage of CPFP carve out)
before attempting package validation. So, in scenario A2, even if the
mempool receives a package with A+C, it would deduplicate A, submit C as an
individual transaction, and allow it due to the CPFP carve out exemption. A
more general goal is: if a transaction would propagate successfully on its
own now, it should still propagate regardless of whether it is included in
a package. The best way to ensure this, as far as I can tell, is to always
try to submit them individually first.

I would note that this proposal doesn't accommodate something like diagram
B, where C is getting CPFP carve out and wants to bring a +1 (e.g. C has
very low fees and is bumped by D). I don't think this is a use case since C
should be the one fee-bumping A, but since we're talking about limitations
around the CPFP carve out, it seems worth noting here.

Let me know if this addresses your concerns?

Thanks,
Gloria

On Mon, Sep 20, 2021 at 10:19 AM Bastien TEINTURIER <bastien@acinq•fr>
wrote:

> Hi Gloria,
>
> Thanks for this detailed post!
>
> The illustrations you provided are very useful for this kind of graph
> topology problems.
>
> The rules you lay out for package RBF look good to me at first glance
> as there are some subtle improvements compared to BIP 125.
>
> > 1. A package cannot exceed `MAX_PACKAGE_COUNT=25` count and
> > `MAX_PACKAGE_SIZE=101KvB` total size [8]
>
> I have a question regarding this rule, as your example 2C could be
> concerning for LN (unless I didn't understand it correctly).
>
> This also touches on the package RBF rule 5 ("The package cannot
> replace more than 100 mempool transactions.")
>
> In your example we have a parent transaction A already in the mempool
> and an unrelated child B. We submit a package C + D where C spends
> another of A's inputs. You're highlighting that this package may be
> rejected because of the unrelated transaction(s) B.
>
> The way I see this, an attacker can abuse this rule to ensure
> transaction A stays pinned in the mempool without confirming by
> broadcasting a set of child transactions that reach these limits
> and pay low fees (where A would be a commit tx in LN).
>
> We had to create the CPFP carve-out rule explicitly to work around
> this limitation, and I think it would be necessary for package RBF
> as well, because in such cases we do want to be able to submit a
> package A + C where C pays high fees to speed up A's confirmation,
> regardless of unrelated unconfirmed children of A...
>
> We could submit only C to benefit from the existing CPFP carve-out
> rule, but that wouldn't work if our local mempool doesn't have A yet,
> but other remote mempools do.
>
> Is my concern justified? Is this something that we should dig into a
> bit deeper?
>
> Thanks,
> Bastien
>
> On Thu, Sep 16, 2021 at 9:55 AM Gloria Zhao via bitcoin-dev <
> bitcoin-dev@lists•linuxfoundation.org> wrote:
>
>> Hi there,
>>
>> I'm writing to propose a set of mempool policy changes to enable package
>> validation (in preparation for package relay) in Bitcoin Core. These
>> would not
>> be consensus or P2P protocol changes. However, since mempool policy
>> significantly affects transaction propagation, I believe this is relevant
>> for
>> the mailing list.
>>
>> My proposal enables packages consisting of multiple parents and 1 child.
>> If you
>> develop software that relies on specific transaction relay assumptions
>> and/or
>> are interested in using package relay in the future, I'm very interested
>> to hear
>> your feedback on the utility or restrictiveness of these package policies
>> for
>> your use cases.
>>
>> A draft implementation of this proposal can be found in [Bitcoin Core
>> PR#22290][1].
>>
>> An illustrated version of this post can be found at
>> https://gist.github.com/glozow/dc4e9d5c5b14ade7cdfac40f43adb18a.
>> I have also linked the images below.
>>
>> ## Background
>>
>> Feel free to skip this section if you are already familiar with mempool
>> policy
>> and package relay terminology.
>>
>> ### Terminology Clarifications
>>
>> * Package = an ordered list of related transactions, representable by a
>> Directed
>>   Acyclic Graph.
>> * Package Feerate = the total modified fees divided by the total virtual
>> size of
>>   all transactions in the package.
>>     - Modified fees = a transaction's base fees + fee delta applied by
>> the user
>>       with `prioritisetransaction`. As such, we expect this to vary across
>> mempools.
>>     - Virtual Size = the maximum of virtual sizes calculated using [BIP141
>>       virtual size][2] and sigop weight. [Implemented here in Bitcoin
>> Core][3].
>>     - Note that feerate is not necessarily based on the base fees and
>> serialized
>>       size.
>>
>> * Fee-Bumping = user/wallet actions that take advantage of miner
>> incentives to
>>   boost a transaction's candidacy for inclusion in a block, including
>> Child Pays
>> for Parent (CPFP) and [BIP125][12] Replace-by-Fee (RBF). Our intention in
>> mempool policy is to recognize when the new transaction is more
>> economical to
>> mine than the original one(s) but not open DoS vectors, so there are some
>> limitations.
>>
>> ### Policy
>>
>> The purpose of the mempool is to store the best (to be most
>> incentive-compatible
>> with miners, highest feerate) candidates for inclusion in a block. Miners
>> use
>> the mempool to build block templates. The mempool is also useful as a
>> cache for
>> boosting block relay and validation performance, aiding transaction
>> relay, and
>> generating feerate estimations.
>>
>> Ideally, all consensus-valid transactions paying reasonable fees should
>> make it
>> to miners through normal transaction relay, without any special
>> connectivity or
>> relationships with miners. On the other hand, nodes do not have unlimited
>> resources, and a P2P network designed to let any honest node broadcast
>> their
>> transactions also exposes the transaction validation engine to DoS
>> attacks from
>> malicious peers.
>>
>> As such, for unconfirmed transactions we are considering for our mempool,
>> we
>> apply a set of validation rules in addition to consensus, primarily to
>> protect
>> us from resource exhaustion and aid our efforts to keep the highest fee
>> transactions. We call this mempool _policy_: a set of (configurable,
>> node-specific) rules that transactions must abide by in order to be
>> accepted
>> into our mempool. Transaction "Standardness" rules and mempool
>> restrictions such
>> as "too-long-mempool-chain" are both examples of policy.
>>
>> ### Package Relay and Package Mempool Accept
>>
>> In transaction relay, we currently consider transactions one at a time for
>> submission to the mempool. This creates a limitation in the node's
>> ability to
>> determine which transactions have the highest feerates, since we cannot
>> take
>> into account descendants (i.e. cannot use CPFP) until all the
>> transactions are
>> in the mempool. Similarly, we cannot use a transaction's descendants when
>> considering it for RBF. When an individual transaction does not meet the
>> mempool
>> minimum feerate and the user isn't able to create a replacement
>> transaction
>> directly, it will not be accepted by mempools.
>>
>> This limitation presents a security issue for applications and users
>> relying on
>> time-sensitive transactions. For example, Lightning and other protocols
>> create
>> UTXOs with multiple spending paths, where one counterparty's spending
>> path opens
>> up after a timelock, and users are protected from cheating scenarios as
>> long as
>> they redeem on-chain in time. A key security assumption is that all
>> parties'
>> transactions will propagate and confirm in a timely manner. This
>> assumption can
>> be broken if fee-bumping does not work as intended.
>>
>> The end goal for Package Relay is to consider multiple transactions at
>> the same
>> time, e.g. a transaction with its high-fee child. This may help us better
>> determine whether transactions should be accepted to our mempool,
>> especially if
>> they don't meet fee requirements individually or are better RBF
>> candidates as a
>> package. A combination of changes to mempool validation logic, policy, and
>> transaction relay allows us to better propagate the transactions with the
>> highest package feerates to miners, and makes fee-bumping tools more
>> powerful
>> for users.
>>
>> The "relay" part of Package Relay suggests P2P messaging changes, but a
>> large
>> part of the changes are in the mempool's package validation logic. We
>> call this
>> *Package Mempool Accept*.
>>
>> ### Previous Work
>>
>> * Given that mempool validation is DoS-sensitive and complex, it would be
>>   dangerous to haphazardly tack on package validation logic. Many efforts
>> have
>> been made to make mempool validation less opaque (see [#16400][4],
>> [#21062][5],
>> [#22675][6], [#22796][7]).
>> * [#20833][8] Added basic capabilities for package validation, test
>> accepts only
>>   (no submission to mempool).
>> * [#21800][9] Implemented package ancestor/descendant limit checks for
>> arbitrary
>>   packages. Still test accepts only.
>> * Previous package relay proposals (see [#16401][10], [#19621][11]).
>>
>> ### Existing Package Rules
>>
>> These are in master as introduced in [#20833][8] and [#21800][9]. I'll
>> consider
>> them as "given" in the rest of this document, though they can be changed,
>> since
>> package validation is test-accept only right now.
>>
>> 1. A package cannot exceed `MAX_PACKAGE_COUNT=25` count and
>> `MAX_PACKAGE_SIZE=101KvB` total size [8]
>>
>>    *Rationale*: This is already enforced as mempool ancestor/descendant
>> limits.
>> Presumably, transactions in a package are all related, so exceeding this
>> limit
>> would mean that the package can either be split up or it wouldn't pass
>> this
>> mempool policy.
>>
>> 2. Packages must be topologically sorted: if any dependencies exist
>> between
>> transactions, parents must appear somewhere before children. [8]
>>
>> 3. A package cannot have conflicting transactions, i.e. none of them can
>> spend
>> the same inputs. This also means there cannot be duplicate transactions.
>> [8]
>>
>> 4. When packages are evaluated against ancestor/descendant limits in a
>> test
>> accept, the union of all of their descendants and ancestors is
>> considered. This
>> is essentially a "worst case" heuristic where every transaction in the
>> package
>> is treated as each other's ancestor and descendant. [8]
>> Packages for which ancestor/descendant limits are accurately captured by
>> this
>> heuristic: [19]
>>
>> There are also limitations such as the fact that CPFP carve out is not
>> applied
>> to package transactions. #20833 also disables RBF in package validation;
>> this
>> proposal overrides that to allow packages to use RBF.
>>
>> ## Proposed Changes
>>
>> The next step in the Package Mempool Accept project is to implement
>> submission
>> to mempool, initially through RPC only. This allows us to test the
>> submission
>> logic before exposing it on P2P.
>>
>> ### Summary
>>
>> - Packages may contain already-in-mempool transactions.
>> - Packages are 2 generations, Multi-Parent-1-Child.
>> - Fee-related checks use the package feerate. This means that wallets can
>> create a package that utilizes CPFP.
>> - Parents are allowed to RBF mempool transactions with a set of rules
>> similar
>>   to BIP125. This enables a combination of CPFP and RBF, where a
>> transaction's descendant fees pay for replacing mempool conflicts.
>>
>> There is a draft implementation in [#22290][1]. It is WIP, but feedback is
>> always welcome.
>>
>> ### Details
>>
>> #### Packages May Contain Already-in-Mempool Transactions
>>
>> A package may contain transactions that are already in the mempool. We
>> remove
>> ("deduplicate") those transactions from the package for the purposes of
>> package
>> mempool acceptance. If a package is empty after deduplication, we do
>> nothing.
>>
>> *Rationale*: Mempools vary across the network. It's possible for a parent
>> to be
>> accepted to the mempool of a peer on its own due to differences in policy
>> and
>> fee market fluctuations. We should not reject or penalize the entire
>> package for
>> an individual transaction as that could be a censorship vector.
>>
>> #### Packages Are Multi-Parent-1-Child
>>
>> Only packages of a specific topology are permitted. Namely, a package is
>> exactly
>> 1 child with all of its unconfirmed parents. After deduplication, the
>> package
>> may be exactly the same, empty, 1 child, 1 child with just some of its
>> unconfirmed parents, etc. Note that it's possible for the parents to be
>> indirect
>> descendants/ancestors of one another, or for parent and child to share a
>> parent,
>> so we cannot make any other topology assumptions.
>>
>> *Rationale*: This allows for fee-bumping by CPFP. Allowing multiple
>> parents
>> makes it possible to fee-bump a batch of transactions. Restricting
>> packages to a
>> defined topology is also easier to reason about and simplifies the
>> validation
>> logic greatly. Multi-parent-1-child allows us to think of the package as
>> one big
>> transaction, where:
>>
>> - Inputs = all the inputs of parents + inputs of the child that come from
>>   confirmed UTXOs
>> - Outputs = all the outputs of the child + all outputs of the parents that
>>   aren't spent by other transactions in the package
>>
>> Examples of packages that follow this rule (variations of example A show
>> some
>> possibilities after deduplication): ![image][15]
>>
>> #### Fee-Related Checks Use Package Feerate
>>
>> Package Feerate = the total modified fees divided by the total virtual
>> size of
>> all transactions in the package.
>>
>> To meet the two feerate requirements of a mempool, i.e., the
>> pre-configured
>> minimum relay feerate (`minRelayTxFee`) and dynamic mempool minimum
>> feerate, the
>> total package feerate is used instead of the individual feerate. The
>> individual
>> transactions are allowed to be below feerate requirements if the package
>> meets
>> the feerate requirements. For example, the parent(s) in the package can
>> have 0
>> fees but be paid for by the child.
>>
>> *Rationale*: This can be thought of as "CPFP within a package," solving
>> the
>> issue of a parent not meeting minimum fees on its own. This allows L2
>> applications to adjust their fees at broadcast time instead of
>> overshooting or
>> risking getting stuck/pinned.
>>
>> We use the package feerate of the package *after deduplication*.
>>
>> *Rationale*:  It would be incorrect to use the fees of transactions that
>> are
>> already in the mempool, as we do not want a transaction's fees to be
>> double-counted for both its individual RBF and package RBF.
>>
>> Examples F and G [14] show the same package, but P1 is submitted
>> individually before
>> the package in example G. In example F, we can see that the 300vB package
>> pays
>> an additional 200sat in fees, which is not enough to pay for its own
>> bandwidth
>> (BIP125#4). In example G, we can see that P1 pays enough to replace M1,
>> but
>> using P1's fees again during package submission would make it look like a
>> 300sat
>> increase for a 200vB package. Even including its fees and size would not
>> be
>> sufficient in this example, since the 300sat looks like enough for the
>> 300vB
>> package. The calculation after deduplication is a 100sat increase for a
>> package
>> of size 200vB, which correctly fails BIP125#4. Assume all transactions
>> have a
>> size of 100vB.
>>
>> #### Package RBF
>>
>> If a package meets feerate requirements as a package, the parents in the
>> transaction are allowed to replace-by-fee mempool transactions. The child
>> cannot
>> replace mempool transactions. Multiple transactions can replace the same
>> transaction, but in order to be valid, none of the transactions can try to
>> replace an ancestor of another transaction in the same package (which
>> would thus
>> make its inputs unavailable).
>>
>> *Rationale*: Even if we are using package feerate, a package will not
>> propagate
>> as intended if RBF still requires each individual transaction to meet the
>> feerate requirements.
>>
>> We use a set of rules slightly modified from BIP125 as follows:
>>
>> ##### Signaling (Rule #1)
>>
>> All mempool transactions to be replaced must signal replaceability.
>>
>> *Rationale*: Package RBF signaling logic should be the same for package
>> RBF and
>> single transaction acceptance. This would be updated if single transaction
>> validation moves to full RBF.
>>
>> ##### New Unconfirmed Inputs (Rule #2)
>>
>> A package may include new unconfirmed inputs, but the ancestor feerate of
>> the
>> child must be at least as high as the ancestor feerates of every
>> transaction
>> being replaced. This is contrary to BIP125#2, which states "The
>> replacement
>> transaction may only include an unconfirmed input if that input was
>> included in
>> one of the original transactions. (An unconfirmed input spends an output
>> from a
>> currently-unconfirmed transaction.)"
>>
>> *Rationale*: The purpose of BIP125#2 is to ensure that the replacement
>> transaction has a higher ancestor score than the original transaction(s)
>> (see
>> [comment][13]). Example H [16] shows how adding a new unconfirmed input
>> can lower the
>> ancestor score of the replacement transaction. P1 is trying to replace
>> M1, and
>> spends an unconfirmed output of M2. P1 pays 800sat, M1 pays 600sat, and
>> M2 pays
>> 100sat. Assume all transactions have a size of 100vB. While, in
>> isolation, P1
>> looks like a better mining candidate than M1, it must be mined with M2,
>> so its
>> ancestor feerate is actually 4.5sat/vB.  This is lower than M1's ancestor
>> feerate, which is 6sat/vB.
>>
>> In package RBF, the rule analogous to BIP125#2 would be "none of the
>> transactions in the package can spend new unconfirmed inputs." Example J
>> [17] shows
>> why, if any of the package transactions have ancestors, package feerate
>> is no
>> longer accurate. Even though M2 and M3 are not ancestors of P1 (which is
>> the
>> replacement transaction in an RBF), we're actually interested in the
>> entire
>> package. A miner should mine M1 which is 5sat/vB instead of M2, M3, P1,
>> P2, and
>> P3, which is only 4sat/vB. The Package RBF rule cannot be loosened to
>> only allow
>> the child to have new unconfirmed inputs, either, because it can still
>> cause us
>> to overestimate the package's ancestor score.
>>
>> However, enforcing a rule analogous to BIP125#2 would not only make
>> Package RBF
>> less useful, but would also break Package RBF for packages with parents
>> already
>> in the mempool: if a package parent has already been submitted, it would
>> look
>> like the child is spending a "new" unconfirmed input. In example K [18],
>> we're
>> looking to replace M1 with the entire package including P1, P2, and P3.
>> We must
>> consider the case where one of the parents is already in the mempool (in
>> this
>> case, P2), which means we must allow P3 to have new unconfirmed inputs.
>> However,
>> M2 lowers the ancestor score of P3 to 4.3sat/vB, so we should not replace
>> M1
>> with this package.
>>
>> Thus, the package RBF rule regarding new unconfirmed inputs is less
>> strict than
>> BIP125#2. However, we still achieve the same goal of requiring the
>> replacement
>> transactions to have an ancestor score at least as high as the original
>> ones. As
>> a result, the entire package is required to be a higher feerate mining
>> candidate
>> than each of the replaced transactions.
>>
>> Another note: the [comment][13] above the BIP125#2 code in the original
>> RBF
>> implementation suggests that the rule was intended to be temporary.
>>
>> ##### Absolute Fee (Rule #3)
>>
>> The package must increase the absolute fee of the mempool, i.e. the total
>> fees
>> of the package must be higher than the absolute fees of the mempool
>> transactions
>> it replaces. Combined with the CPFP rule above, this differs from BIP125
>> Rule #3
>> - an individual transaction in the package may have lower fees than the
>>   transaction(s) it is replacing. In fact, it may have 0 fees, and the
>> child
>> pays for RBF.
>>
>> ##### Feerate (Rule #4)
>>
>> The package must pay for its own bandwidth; the package feerate must be
>> higher
>> than the replaced transactions by at least the incremental relay feerate
>> (`incrementalRelayFee`). Combined with the CPFP rule above, this differs
>> from
>> BIP125 Rule #4 - an individual transaction in the package can have a lower
>> feerate than the transaction(s) it is replacing. In fact, it may have 0
>> fees,
>> and the child pays for RBF.
>>
>> ##### Total Number of Replaced Transactions (Rule #5)
>>
>> The package cannot replace more than 100 mempool transactions. This is
>> identical
>> to BIP125 Rule #5.
>>
>> ### Expected FAQs
>>
>> 1. Is it possible for only some of the package to make it into the
>> mempool?
>>
>>    Yes, it is. However, since we evict transactions from the mempool by
>> descendant score and the package child is supposed to be sponsoring the
>> fees of
>> its parents, the most common scenario would be all-or-nothing. This is
>> incentive-compatible. In fact, to be conservative, package validation
>> should
>> begin by trying to submit all of the transactions individually, and only
>> use the
>> package mempool acceptance logic if the parents fail due to low feerate.
>>
>> 2. Should we allow packages to contain already-confirmed transactions?
>>
>>     No, for practical reasons. In mempool validation, we actually aren't
>> able to
>> tell with 100% confidence if we are looking at a transaction that has
>> already
>> confirmed, because we look up inputs using a UTXO set. If we have
>> historical
>> block data, it's possible to look for it, but this is inefficient, not
>> always
>> possible for pruning nodes, and unnecessary because we're not going to do
>> anything with the transaction anyway. As such, we already have the
>> expectation
>> that transaction relay is somewhat "stateful" i.e. nobody should be
>> relaying
>> transactions that have already been confirmed. Similarly, we shouldn't be
>> relaying packages that contain already-confirmed transactions.
>>
>> [1]: https://github.com/bitcoin/bitcoin/pull/22290
>> [2]:
>> https://github.com/bitcoin/bips/blob/1f0b563738199ca60d32b4ba779797fc97d040fe/bip-0141.mediawiki#transaction-size-calculations
>> [3]:
>> https://github.com/bitcoin/bitcoin/blob/94f83534e4b771944af7d9ed0f40746f392eb75e/src/policy/policy.cpp#L282
>> [4]: https://github.com/bitcoin/bitcoin/pull/16400
>> [5]: https://github.com/bitcoin/bitcoin/pull/21062
>> [6]: https://github.com/bitcoin/bitcoin/pull/22675
>> [7]: https://github.com/bitcoin/bitcoin/pull/22796
>> [8]: https://github.com/bitcoin/bitcoin/pull/20833
>> [9]: https://github.com/bitcoin/bitcoin/pull/21800
>> [10]: https://github.com/bitcoin/bitcoin/pull/16401
>> [11]: https://github.com/bitcoin/bitcoin/pull/19621
>> [12]: https://github.com/bitcoin/bips/blob/master/bip-0125.mediawiki
>> [13]:
>> https://github.com/bitcoin/bitcoin/pull/6871/files#diff-34d21af3c614ea3cee120df276c9c4ae95053830d7f1d3deaf009a4625409ad2R1101-R1104
>> [14]:
>> https://user-images.githubusercontent.com/25183001/133567078-075a971c-0619-4339-9168-b41fd2b90c28.png
>> [15]:
>> https://user-images.githubusercontent.com/25183001/132856734-fc17da75-f875-44bb-b954-cb7a1725cc0d.png
>> [16]:
>> https://user-images.githubusercontent.com/25183001/133567347-a3e2e4a8-ae9c-49f8-abb9-81e8e0aba224.png
>> [17]:
>> https://user-images.githubusercontent.com/25183001/133567370-21566d0e-36c8-4831-b1a8-706634540af3.png
>> [18]:
>> https://user-images.githubusercontent.com/25183001/133567444-bfff1142-439f-4547-800a-2ba2b0242bcb.png
>> [19]:
>> https://user-images.githubusercontent.com/25183001/133456219-0bb447cb-dcb4-4a31-b9c1-7d86205b68bc.png
>> [20]:
>> https://user-images.githubusercontent.com/25183001/132857787-7b7c6f56-af96-44c8-8d78-983719888c19.png
>> _______________________________________________
>> bitcoin-dev mailing list
>> bitcoin-dev@lists•linuxfoundation.org
>> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>>
>

[-- Attachment #2: Type: text/html, Size: 28766 bytes --]

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [bitcoin-dev] Proposal: Package Mempool Accept and Package RBF
  2021-09-21 11:18   ` Gloria Zhao
@ 2021-09-21 15:18     ` Bastien TEINTURIER
  2021-09-21 16:42       ` Gloria Zhao
  0 siblings, 1 reply; 16+ messages in thread
From: Bastien TEINTURIER @ 2021-09-21 15:18 UTC (permalink / raw)
  To: Gloria Zhao; +Cc: Bitcoin Protocol Discussion

[-- Attachment #1: Type: text/plain, Size: 27891 bytes --]

Hi Gloria,

> I believe this attack is mitigated as long as we attempt to submit
transactions individually

Unfortunately not, as there exists a pinning scenario in LN where a
different commit tx is pinned, but you actually can't know which one.

Since I really like your diagrams, I made one as well to illustrate:
https://user-images.githubusercontent.com/31281497/134198114-5e9c6857-e8fc-405a-be57-18181d5e54cb.jpg

Here the issue is that a revoked commitment tx A' is pinned in other
mempools, with a long chain of descendants (or descendants that reach
the maximum replaceable size).

We would really like A + C to be able to replace this pinned A'.
We can't submit individually because A on its own won't replace A'...
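
To make the pin concrete, here is a rough numeric sketch (the fee and size
numbers are made up for illustration, not taken from the thread): A conflicts
with the revoked A', so under BIP125 rule #3 it must outbid A' plus every
descendant that would be evicted along with it, which the attacker has made
arbitrarily expensive with junk children. (The other variant, a descendant set
that exceeds the replacement limit, hits rule #5 instead.)

```python
A_prime_fee = 500                  # revoked commit tx A' (sat)
pin_descendants_fee = 24 * 200     # 24 low-fee junk children attached to A' (sat)
must_outbid = A_prime_fee + pin_descendants_fee   # 5300 sat to evict A' and friends

A_fee = 500                        # honest commit tx A pays roughly the same as A'
print(A_fee > must_outbid)         # False: A alone is rejected, so A' stays pinned
```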

> I would note that this proposal doesn't accommodate something like
diagram B, where C is getting CPFP carve out and wants to bring a +1

No worries, that case shouldn't be a concern.
I believe any L2 protocol can always ensure it confirms such tx trees
"one depth after the other" without impacting funds safety, so it
only needs to ensure A + C can get into mempools.

Thanks,
Bastien

On Tue, Sep 21, 2021 at 1:18 PM Gloria Zhao <gloriajzhao@gmail•com> wrote:

> Hi Bastien,
>
> Thank you for your feedback!
>
> > In your example we have a parent transaction A already in the mempool
> > and an unrelated child B. We submit a package C + D where C spends
> > another of A's inputs. You're highlighting that this package may be
> > rejected because of the unrelated transaction(s) B.
>
> > The way I see this, an attacker can abuse this rule to ensure
> > transaction A stays pinned in the mempool without confirming by
> > broadcasting a set of child transactions that reach these limits
> > and pay low fees (where A would be a commit tx in LN).
>
> I believe you are describing a pinning attack in which your adversarial
> counterparty attempts to monopolize the mempool descendant limit of the
> shared  transaction A in order to prevent you from submitting a fee-bumping
> child C; I've tried to illustrate this as diagram A here:
> https://user-images.githubusercontent.com/25183001/134159860-068080d0-bbb6-4356-ae74-00df00644c74.png
> (please let me know if I'm misunderstanding).
>
> I believe this attack is mitigated as long as we attempt to submit
> transactions individually (and thus take advantage of CPFP carve out)
> before attempting package validation. So, in scenario A2, even if the
> mempool receives a package with A+C, it would deduplicate A, submit C as an
> individual transaction, and allow it due to the CPFP carve out exemption. A
> more general goal is: if a transaction would propagate successfully on its
> own now, it should still propagate regardless of whether it is included in
> a package. The best way to ensure this, as far as I can tell, is to always
> try to submit them individually first.
>
> I would note that this proposal doesn't accommodate something like diagram
> B, where C is getting CPFP carve out and wants to bring a +1 (e.g. C has
> very low fees and is bumped by D). I don't think this is a use case since C
> should be the one fee-bumping A, but since we're talking about limitations
> around the CPFP carve out, this is it.
>
> Let me know if this addresses your concerns?
>
> Thanks,
> Gloria
>
> On Mon, Sep 20, 2021 at 10:19 AM Bastien TEINTURIER <bastien@acinq•fr>
> wrote:
>
>> Hi Gloria,
>>
>> Thanks for this detailed post!
>>
>> The illustrations you provided are very useful for this kind of graph
>> topology problem.
>>
>> The rules you lay out for package RBF look good to me at first glance
>> as there are some subtle improvements compared to BIP 125.
>>
>> > 1. A package cannot exceed `MAX_PACKAGE_COUNT=25` count and
>> > `MAX_PACKAGE_SIZE=101KvB` total size [8]
>>
>> I have a question regarding this rule, as your example 2C could be
>> concerning for LN (unless I didn't understand it correctly).
>>
>> This also touches on the package RBF rule 5 ("The package cannot
>> replace more than 100 mempool transactions.")
>>
>> In your example we have a parent transaction A already in the mempool
>> and an unrelated child B. We submit a package C + D where C spends
>> another of A's inputs. You're highlighting that this package may be
>> rejected because of the unrelated transaction(s) B.
>>
>> The way I see this, an attacker can abuse this rule to ensure
>> transaction A stays pinned in the mempool without confirming by
>> broadcasting a set of child transactions that reach these limits
>> and pay low fees (where A would be a commit tx in LN).
>>
>> We had to create the CPFP carve-out rule explicitly to work around
>> this limitation, and I think it would be necessary for package RBF
>> as well, because in such cases we do want to be able to submit a
>> package A + C where C pays high fees to speed up A's confirmation,
>> regardless of unrelated unconfirmed children of A...
>>
>> We could submit only C to benefit from the existing CPFP carve-out
>> rule, but that wouldn't work if our local mempool doesn't have A yet,
>> but other remote mempools do.
>>
>> Is my concern justified? Is this something that we should dig into a
>> bit deeper?
>>
>> Thanks,
>> Bastien
>>
>> On Thu, Sep 16, 2021 at 9:55 AM Gloria Zhao via bitcoin-dev <
>> bitcoin-dev@lists•linuxfoundation.org> wrote:
>>
>>> Hi there,
>>>
>>> I'm writing to propose a set of mempool policy changes to enable package
>>> validation (in preparation for package relay) in Bitcoin Core. These
>>> would not
>>> be consensus or P2P protocol changes. However, since mempool policy
>>> significantly affects transaction propagation, I believe this is
>>> relevant for
>>> the mailing list.
>>>
>>> My proposal enables packages consisting of multiple parents and 1 child.
>>> If you
>>> develop software that relies on specific transaction relay assumptions
>>> and/or
>>> are interested in using package relay in the future, I'm very interested
>>> to hear
>>> your feedback on the utility or restrictiveness of these package
>>> policies for
>>> your use cases.
>>>
>>> A draft implementation of this proposal can be found in [Bitcoin Core
>>> PR#22290][1].
>>>
>>> An illustrated version of this post can be found at
>>> https://gist.github.com/glozow/dc4e9d5c5b14ade7cdfac40f43adb18a.
>>> I have also linked the images below.
>>>
>>> ## Background
>>>
>>> Feel free to skip this section if you are already familiar with mempool
>>> policy
>>> and package relay terminology.
>>>
>>> ### Terminology Clarifications
>>>
>>> * Package = an ordered list of related transactions, representable by a
>>> Directed
>>>   Acyclic Graph.
>>> * Package Feerate = the total modified fees divided by the total virtual
>>> size of
>>>   all transactions in the package.
>>>     - Modified fees = a transaction's base fees + fee delta applied by
>>> the user
>>>       with `prioritisetransaction`. As such, we expect this to vary
>>> across
>>> mempools.
>>>     - Virtual Size = the maximum of virtual sizes calculated using
>>> [BIP141
>>>       virtual size][2] and sigop weight. [Implemented here in Bitcoin
>>> Core][3].
>>>     - Note that feerate is not necessarily based on the base fees and
>>> serialized
>>>       size.
>>>
>>> * Fee-Bumping = user/wallet actions that take advantage of miner
>>> incentives to
>>>   boost a transaction's candidacy for inclusion in a block, including
>>> Child Pays
>>> for Parent (CPFP) and [BIP125][12] Replace-by-Fee (RBF). Our intention in
>>> mempool policy is to recognize when the new transaction is more
>>> economical to
>>> mine than the original one(s) but not open DoS vectors, so there are some
>>> limitations.
>>>
>>> ### Policy
>>>
>>> The purpose of the mempool is to store the best (to be most
>>> incentive-compatible
>>> with miners, highest feerate) candidates for inclusion in a block.
>>> Miners use
>>> the mempool to build block templates. The mempool is also useful as a
>>> cache for
>>> boosting block relay and validation performance, aiding transaction
>>> relay, and
>>> generating feerate estimations.
>>>
>>> Ideally, all consensus-valid transactions paying reasonable fees should
>>> make it
>>> to miners through normal transaction relay, without any special
>>> connectivity or
>>> relationships with miners. On the other hand, nodes do not have unlimited
>>> resources, and a P2P network designed to let any honest node broadcast
>>> their
>>> transactions also exposes the transaction validation engine to DoS
>>> attacks from
>>> malicious peers.
>>>
>>> As such, for unconfirmed transactions we are considering for our
>>> mempool, we
>>> apply a set of validation rules in addition to consensus, primarily to
>>> protect
>>> us from resource exhaustion and aid our efforts to keep the highest fee
>>> transactions. We call this mempool _policy_: a set of (configurable,
>>> node-specific) rules that transactions must abide by in order to be
>>> accepted
>>> into our mempool. Transaction "Standardness" rules and mempool
>>> restrictions such
>>> as "too-long-mempool-chain" are both examples of policy.
>>>
>>> ### Package Relay and Package Mempool Accept
>>>
>>> In transaction relay, we currently consider transactions one at a time
>>> for
>>> submission to the mempool. This creates a limitation in the node's
>>> ability to
>>> determine which transactions have the highest feerates, since we cannot
>>> take
>>> into account descendants (i.e. cannot use CPFP) until all the
>>> transactions are
>>> in the mempool. Similarly, we cannot use a transaction's descendants when
>>> considering it for RBF. When an individual transaction does not meet the
>>> mempool
>>> minimum feerate and the user isn't able to create a replacement
>>> transaction
>>> directly, it will not be accepted by mempools.
>>>
>>> This limitation presents a security issue for applications and users
>>> relying on
>>> time-sensitive transactions. For example, Lightning and other protocols
>>> create
>>> UTXOs with multiple spending paths, where one counterparty's spending
>>> path opens
>>> up after a timelock, and users are protected from cheating scenarios as
>>> long as
>>> they redeem on-chain in time. A key security assumption is that all
>>> parties'
>>> transactions will propagate and confirm in a timely manner. This
>>> assumption can
>>> be broken if fee-bumping does not work as intended.
>>>
>>> The end goal for Package Relay is to consider multiple transactions at
>>> the same
>>> time, e.g. a transaction with its high-fee child. This may help us better
>>> determine whether transactions should be accepted to our mempool,
>>> especially if
>>> they don't meet fee requirements individually or are better RBF
>>> candidates as a
>>> package. A combination of changes to mempool validation logic, policy,
>>> and
>>> transaction relay allows us to better propagate the transactions with the
>>> highest package feerates to miners, and makes fee-bumping tools more
>>> powerful
>>> for users.
>>>
>>> The "relay" part of Package Relay suggests P2P messaging changes, but a
>>> large
>>> part of the changes are in the mempool's package validation logic. We
>>> call this
>>> *Package Mempool Accept*.
>>>
>>> ### Previous Work
>>>
>>> * Given that mempool validation is DoS-sensitive and complex, it would be
>>>   dangerous to haphazardly tack on package validation logic. Many
>>> efforts have
>>> been made to make mempool validation less opaque (see [#16400][4],
>>> [#21062][5],
>>> [#22675][6], [#22796][7]).
>>> * [#20833][8] Added basic capabilities for package validation, test
>>> accepts only
>>>   (no submission to mempool).
>>> * [#21800][9] Implemented package ancestor/descendant limit checks for
>>> arbitrary
>>>   packages. Still test accepts only.
>>> * Previous package relay proposals (see [#16401][10], [#19621][11]).
>>>
>>> ### Existing Package Rules
>>>
>>> These are in master as introduced in [#20833][8] and [#21800][9]. I'll
>>> consider
>>> them as "given" in the rest of this document, though they can be
>>> changed, since
>>> package validation is test-accept only right now.
>>>
>>> 1. A package cannot exceed `MAX_PACKAGE_COUNT=25` count and
>>> `MAX_PACKAGE_SIZE=101KvB` total size [8]
>>>
>>>    *Rationale*: This is already enforced as mempool ancestor/descendant
>>> limits.
>>> Presumably, transactions in a package are all related, so exceeding this
>>> limit
>>> would mean that the package can either be split up or it wouldn't pass
>>> this
>>> mempool policy.
>>>
>>> 2. Packages must be topologically sorted: if any dependencies exist
>>> between
>>> transactions, parents must appear somewhere before children. [8]
>>>
>>> 3. A package cannot have conflicting transactions, i.e. none of them can
>>> spend
>>> the same inputs. This also means there cannot be duplicate transactions.
>>> [8]
>>>
>>> 4. When packages are evaluated against ancestor/descendant limits in a
>>> test
>>> accept, the union of all of their descendants and ancestors is
>>> considered. This
>>> is essentially a "worst case" heuristic where every transaction in the
>>> package
>>> is treated as each other's ancestor and descendant. [8]
>>> Packages for which ancestor/descendant limits are accurately captured by
>>> this
>>> heuristic: [19]
>>>
>>> There are also limitations such as the fact that CPFP carve out is not
>>> applied
>>> to package transactions. #20833 also disables RBF in package validation;
>>> this
>>> proposal overrides that to allow packages to use RBF.
>>>
>>> ## Proposed Changes
>>>
>>> The next step in the Package Mempool Accept project is to implement
>>> submission
>>> to mempool, initially through RPC only. This allows us to test the
>>> submission
>>> logic before exposing it on P2P.
>>>
>>> ### Summary
>>>
>>> - Packages may contain already-in-mempool transactions.
>>> - Packages are 2 generations, Multi-Parent-1-Child.
>>> - Fee-related checks use the package feerate. This means that wallets can
>>> create a package that utilizes CPFP.
>>> - Parents are allowed to RBF mempool transactions with a set of rules
>>> similar
>>>   to BIP125. This enables a combination of CPFP and RBF, where a
>>> transaction's descendant fees pay for replacing mempool conflicts.
>>>
>>> There is a draft implementation in [#22290][1]. It is WIP, but feedback
>>> is
>>> always welcome.
>>>
>>> ### Details
>>>
>>> #### Packages May Contain Already-in-Mempool Transactions
>>>
>>> A package may contain transactions that are already in the mempool. We
>>> remove
>>> ("deduplicate") those transactions from the package for the purposes of
>>> package
>>> mempool acceptance. If a package is empty after deduplication, we do
>>> nothing.
>>>
>>> *Rationale*: Mempools vary across the network. It's possible for a
>>> parent to be
>>> accepted to the mempool of a peer on its own due to differences in
>>> policy and
>>> fee market fluctuations. We should not reject or penalize the entire
>>> package for
>>> an individual transaction as that could be a censorship vector.
>>>
>>> #### Packages Are Multi-Parent-1-Child
>>>
>>> Only packages of a specific topology are permitted. Namely, a package is
>>> exactly
>>> 1 child with all of its unconfirmed parents. After deduplication, the
>>> package
>>> may be exactly the same, empty, 1 child, 1 child with just some of its
>>> unconfirmed parents, etc. Note that it's possible for the parents to be
>>> indirect
>>> descendants/ancestors of one another, or for parent and child to share a
>>> parent,
>>> so we cannot make any other topology assumptions.
>>>
>>> *Rationale*: This allows for fee-bumping by CPFP. Allowing multiple
>>> parents
>>> makes it possible to fee-bump a batch of transactions. Restricting
>>> packages to a
>>> defined topology is also easier to reason about and simplifies the
>>> validation
>>> logic greatly. Multi-parent-1-child allows us to think of the package as
>>> one big
>>> transaction, where:
>>>
>>> - Inputs = all the inputs of parents + inputs of the child that come from
>>>   confirmed UTXOs
>>> - Outputs = all the outputs of the child + all outputs of the parents
>>> that
>>>   aren't spent by other transactions in the package
>>>
>>> Examples of packages that follow this rule (variations of example A show
>>> some
>>> possibilities after deduplication): ![image][15]
>>>
>>> #### Fee-Related Checks Use Package Feerate
>>>
>>> Package Feerate = the total modified fees divided by the total virtual
>>> size of
>>> all transactions in the package.
>>>
>>> To meet the two feerate requirements of a mempool, i.e., the
>>> pre-configured
>>> minimum relay feerate (`minRelayTxFee`) and dynamic mempool minimum
>>> feerate, the
>>> total package feerate is used instead of the individual feerate. The
>>> individual
>>> transactions are allowed to be below feerate requirements if the package
>>> meets
>>> the feerate requirements. For example, the parent(s) in the package can
>>> have 0
>>> fees but be paid for by the child.
>>>
>>> *Rationale*: This can be thought of as "CPFP within a package," solving
>>> the
>>> issue of a parent not meeting minimum fees on its own. This allows L2
>>> applications to adjust their fees at broadcast time instead of
>>> overshooting or
>>> risking getting stuck/pinned.
>>>
>>> We use the package feerate of the package *after deduplication*.
>>>
>>> *Rationale*:  It would be incorrect to use the fees of transactions that
>>> are
>>> already in the mempool, as we do not want a transaction's fees to be
>>> double-counted for both its individual RBF and package RBF.
>>>
>>> Examples F and G [14] show the same package, but P1 is submitted
>>> individually before
>>> the package in example G. In example F, we can see that the 300vB
>>> package pays
>>> an additional 200sat in fees, which is not enough to pay for its own
>>> bandwidth
>>> (BIP125#4). In example G, we can see that P1 pays enough to replace M1,
>>> but
>>> using P1's fees again during package submission would make it look like
>>> a 300sat
>>> increase for a 200vB package. Even including its fees and size would not
>>> be
>>> sufficient in this example, since the 300sat looks like enough for the
>>> 300vB
>>> package. The calculation after deduplication is a 100sat increase for a
>>> package
>>> of size 200vB, which correctly fails BIP125#4. Assume all transactions
>>> have a
>>> size of 100vB.
>>>
>>> #### Package RBF
>>>
>>> If a package meets feerate requirements as a package, the parents in the
>>> transaction are allowed to replace-by-fee mempool transactions. The
>>> child cannot
>>> replace mempool transactions. Multiple transactions can replace the same
>>> transaction, but in order to be valid, none of the transactions can try
>>> to
>>> replace an ancestor of another transaction in the same package (which
>>> would thus
>>> make its inputs unavailable).
>>>
>>> *Rationale*: Even if we are using package feerate, a package will not
>>> propagate
>>> as intended if RBF still requires each individual transaction to meet the
>>> feerate requirements.
>>>
>>> We use a set of rules slightly modified from BIP125 as follows:
>>>
>>> ##### Signaling (Rule #1)
>>>
>>> All mempool transactions to be replaced must signal replaceability.
>>>
>>> *Rationale*: Package RBF signaling logic should be the same for package
>>> RBF and
>>> single transaction acceptance. This would be updated if single
>>> transaction
>>> validation moves to full RBF.
>>>
>>> ##### New Unconfirmed Inputs (Rule #2)
>>>
>>> A package may include new unconfirmed inputs, but the ancestor feerate
>>> of the
>>> child must be at least as high as the ancestor feerates of every
>>> transaction
>>> being replaced. This is contrary to BIP125#2, which states "The
>>> replacement
>>> transaction may only include an unconfirmed input if that input was
>>> included in
>>> one of the original transactions. (An unconfirmed input spends an output
>>> from a
>>> currently-unconfirmed transaction.)"
>>>
>>> *Rationale*: The purpose of BIP125#2 is to ensure that the replacement
>>> transaction has a higher ancestor score than the original transaction(s)
>>> (see
>>> [comment][13]). Example H [16] shows how adding a new unconfirmed input
>>> can lower the
>>> ancestor score of the replacement transaction. P1 is trying to replace
>>> M1, and
>>> spends an unconfirmed output of M2. P1 pays 800sat, M1 pays 600sat, and
>>> M2 pays
>>> 100sat. Assume all transactions have a size of 100vB. While, in
>>> isolation, P1
>>> looks like a better mining candidate than M1, it must be mined with M2,
>>> so its
>>> ancestor feerate is actually 4.5sat/vB.  This is lower than M1's ancestor
>>> feerate, which is 6sat/vB.
>>>
>>> In package RBF, the rule analogous to BIP125#2 would be "none of the
>>> transactions in the package can spend new unconfirmed inputs." Example J
>>> [17] shows
>>> why, if any of the package transactions have ancestors, package feerate
>>> is no
>>> longer accurate. Even though M2 and M3 are not ancestors of P1 (which is
>>> the
>>> replacement transaction in an RBF), we're actually interested in the
>>> entire
>>> package. A miner should mine M1 which is 5sat/vB instead of M2, M3, P1,
>>> P2, and
>>> P3, which is only 4sat/vB. The Package RBF rule cannot be loosened to
>>> only allow
>>> the child to have new unconfirmed inputs, either, because it can still
>>> cause us
>>> to overestimate the package's ancestor score.
>>>
>>> However, enforcing a rule analogous to BIP125#2 would not only make
>>> Package RBF
>>> less useful, but would also break Package RBF for packages with parents
>>> already
>>> in the mempool: if a package parent has already been submitted, it would
>>> look
>>> like the child is spending a "new" unconfirmed input. In example K [18],
>>> we're
>>> looking to replace M1 with the entire package including P1, P2, and P3.
>>> We must
>>> consider the case where one of the parents is already in the mempool (in
>>> this
>>> case, P2), which means we must allow P3 to have new unconfirmed inputs.
>>> However,
>>> M2 lowers the ancestor score of P3 to 4.3sat/vB, so we should not
>>> replace M1
>>> with this package.
>>>
>>> Thus, the package RBF rule regarding new unconfirmed inputs is less
>>> strict than
>>> BIP125#2. However, we still achieve the same goal of requiring the
>>> replacement
>>> transactions to have an ancestor score at least as high as the original
>>> ones. As
>>> a result, the entire package is required to be a higher feerate mining
>>> candidate
>>> than each of the replaced transactions.
>>>
>>> Another note: the [comment][13] above the BIP125#2 code in the original
>>> RBF
>>> implementation suggests that the rule was intended to be temporary.
>>>
>>> ##### Absolute Fee (Rule #3)
>>>
>>> The package must increase the absolute fee of the mempool, i.e. the
>>> total fees
>>> of the package must be higher than the absolute fees of the mempool
>>> transactions
>>> it replaces. Combined with the CPFP rule above, this differs from BIP125
>>> Rule #3
>>> - an individual transaction in the package may have lower fees than the
>>>   transaction(s) it is replacing. In fact, it may have 0 fees, and the
>>> child
>>> pays for RBF.
>>>
>>> ##### Feerate (Rule #4)
>>>
>>> The package must pay for its own bandwidth; the package feerate must be
>>> higher
>>> than the replaced transactions by at least the incremental relay feerate
>>> (`incrementalRelayFee`). Combined with the CPFP rule above, this differs
>>> from
>>> BIP125 Rule #4 - an individual transaction in the package can have a
>>> lower
>>> feerate than the transaction(s) it is replacing. In fact, it may have 0
>>> fees,
>>> and the child pays for RBF.
>>>
>>> ##### Total Number of Replaced Transactions (Rule #5)
>>>
>>> The package cannot replace more than 100 mempool transactions. This is
>>> identical
>>> to BIP125 Rule #5.
>>>
>>> ### Expected FAQs
>>>
>>> 1. Is it possible for only some of the package to make it into the
>>> mempool?
>>>
>>>    Yes, it is. However, since we evict transactions from the mempool by
>>> descendant score and the package child is supposed to be sponsoring the
>>> fees of
>>> its parents, the most common scenario would be all-or-nothing. This is
>>> incentive-compatible. In fact, to be conservative, package validation
>>> should
>>> begin by trying to submit all of the transactions individually, and only
>>> use the
>>> package mempool acceptance logic if the parents fail due to low feerate.
>>>
>>> 2. Should we allow packages to contain already-confirmed transactions?
>>>
>>>     No, for practical reasons. In mempool validation, we actually aren't
>>> able to
>>> tell with 100% confidence if we are looking at a transaction that has
>>> already
>>> confirmed, because we look up inputs using a UTXO set. If we have
>>> historical
>>> block data, it's possible to look for it, but this is inefficient, not
>>> always
>>> possible for pruning nodes, and unnecessary because we're not going to do
>>> anything with the transaction anyway. As such, we already have the
>>> expectation
>>> that transaction relay is somewhat "stateful" i.e. nobody should be
>>> relaying
>>> transactions that have already been confirmed. Similarly, we shouldn't be
>>> relaying packages that contain already-confirmed transactions.
>>>
>>> [1]: https://github.com/bitcoin/bitcoin/pull/22290
>>> [2]:
>>> https://github.com/bitcoin/bips/blob/1f0b563738199ca60d32b4ba779797fc97d040fe/bip-0141.mediawiki#transaction-size-calculations
>>> [3]:
>>> https://github.com/bitcoin/bitcoin/blob/94f83534e4b771944af7d9ed0f40746f392eb75e/src/policy/policy.cpp#L282
>>> [4]: https://github.com/bitcoin/bitcoin/pull/16400
>>> [5]: https://github.com/bitcoin/bitcoin/pull/21062
>>> [6]: https://github.com/bitcoin/bitcoin/pull/22675
>>> [7]: https://github.com/bitcoin/bitcoin/pull/22796
>>> [8]: https://github.com/bitcoin/bitcoin/pull/20833
>>> [9]: https://github.com/bitcoin/bitcoin/pull/21800
>>> [10]: https://github.com/bitcoin/bitcoin/pull/16401
>>> [11]: https://github.com/bitcoin/bitcoin/pull/19621
>>> [12]: https://github.com/bitcoin/bips/blob/master/bip-0125.mediawiki
>>> [13]:
>>> https://github.com/bitcoin/bitcoin/pull/6871/files#diff-34d21af3c614ea3cee120df276c9c4ae95053830d7f1d3deaf009a4625409ad2R1101-R1104
>>> [14]:
>>> https://user-images.githubusercontent.com/25183001/133567078-075a971c-0619-4339-9168-b41fd2b90c28.png
>>> [15]:
>>> https://user-images.githubusercontent.com/25183001/132856734-fc17da75-f875-44bb-b954-cb7a1725cc0d.png
>>> [16]:
>>> https://user-images.githubusercontent.com/25183001/133567347-a3e2e4a8-ae9c-49f8-abb9-81e8e0aba224.png
>>> [17]:
>>> https://user-images.githubusercontent.com/25183001/133567370-21566d0e-36c8-4831-b1a8-706634540af3.png
>>> [18]:
>>> https://user-images.githubusercontent.com/25183001/133567444-bfff1142-439f-4547-800a-2ba2b0242bcb.png
>>> [19]:
>>> https://user-images.githubusercontent.com/25183001/133456219-0bb447cb-dcb4-4a31-b9c1-7d86205b68bc.png
>>> [20]:
>>> https://user-images.githubusercontent.com/25183001/132857787-7b7c6f56-af96-44c8-8d78-983719888c19.png
>>> _______________________________________________
>>> bitcoin-dev mailing list
>>> bitcoin-dev@lists•linuxfoundation.org
>>> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>>>
>>

[-- Attachment #2: Type: text/html, Size: 30500 bytes --]

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [bitcoin-dev] Proposal: Package Mempool Accept and Package RBF
  2021-09-21 15:18     ` Bastien TEINTURIER
@ 2021-09-21 16:42       ` Gloria Zhao
  2021-09-22  7:10         ` Bastien TEINTURIER
  0 siblings, 1 reply; 16+ messages in thread
From: Gloria Zhao @ 2021-09-21 16:42 UTC (permalink / raw)
  To: Bastien TEINTURIER; +Cc: Bitcoin Protocol Discussion

[-- Attachment #1: Type: text/plain, Size: 29732 bytes --]

Hi Bastien,

Excellent diagram :D

> Here the issue is that a revoked commitment tx A' is pinned in other
> mempools, with a long chain of descendants (or descendants that reach
> the maximum replaceable size).
> We would really like A + C to be able to replace this pinned A'.
> We can't submit individually because A on its own won't replace A'...

Right, this is a key motivation for having Package RBF. In this case, A+C
can replace A' + B1...B24.

Due to the descendant limit (each node operator can increase it on their
own node, but the default is 25), A' should have no more than 25
descendants, even including CPFP carve out. As long as A only conflicts
with A', it won't be trying to replace more than 100 transactions. The
proposed package RBF will allow C to pay for A's conflicts, since their
package feerate is used in the fee comparisons. A is not a descendant of
A', so the existence of B1...B24 does not prevent the replacement.
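
As a sketch of the checks involved (placeholder types and field names, not
Bitcoin Core code; rule #2's new-unconfirmed-inputs check is omitted here),
the package-level comparison for the deduplicated package [A, C] against the
conflicts {A', B1...B24} would look roughly like this, using the package fee
and vsize rather than A's alone:

```python
INCREMENTAL_RELAY_FEERATE = 1      # sat/vB, assumed default for illustration
MAX_REPLACEMENTS = 100             # package RBF rule #5

def package_rbf_ok(package, conflicts):
    """package = deduplicated package (e.g. [A, C]);
    conflicts = every mempool tx evicted by it (e.g. A' and B1...B24)."""
    if any(not tx.signals_rbf for tx in conflicts):
        return False                               # rule #1: signaling
    if len(conflicts) > MAX_REPLACEMENTS:
        return False                               # rule #5: at most 100 evictions
    package_fee = sum(tx.fee for tx in package)    # C's fee counts toward A's conflicts
    package_vsize = sum(tx.vsize for tx in package)
    replaced_fee = sum(tx.fee for tx in conflicts)
    if package_fee <= replaced_fee:
        return False                               # rule #3: absolute fee must increase
    # rule #4: the package must pay for its own bandwidth
    return package_fee - replaced_fee >= INCREMENTAL_RELAY_FEERATE * package_vsize
```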

Best,
Gloria

On Tue, Sep 21, 2021 at 4:18 PM Bastien TEINTURIER <bastien@acinq•fr> wrote:

> Hi Gloria,
>
> > I believe this attack is mitigated as long as we attempt to submit
> transactions individually
>
> Unfortunately not, as there exists a pinning scenario in LN where a
> different commit tx is pinned, but you actually can't know which one.
>
> Since I really like your diagrams, I made one as well to illustrate:
>
> https://user-images.githubusercontent.com/31281497/134198114-5e9c6857-e8fc-405a-be57-18181d5e54cb.jpg
>
> Here the issue is that a revoked commitment tx A' is pinned in other
> mempools, with a long chain of descendants (or descendants that reach
> the maximum replaceable size).
>
> We would really like A + C to be able to replace this pinned A'.
> We can't submit individually because A on its own won't replace A'...
>
> > I would note that this proposal doesn't accommodate something like
> diagram B, where C is getting CPFP carve out and wants to bring a +1
>
> No worries, that case shouldn't be a concern.
> I believe any L2 protocol can always ensure it confirms such tx trees
> "one depth after the other" without impacting funds safety, so it
> only needs to ensure A + C can get into mempools.
>
> Thanks,
> Bastien
>
> On Tue, Sep 21, 2021 at 1:18 PM Gloria Zhao <gloriajzhao@gmail•com>
> wrote:
>
>> Hi Bastien,
>>
>> Thank you for your feedback!
>>
>> > In your example we have a parent transaction A already in the mempool
>> > and an unrelated child B. We submit a package C + D where C spends
>> > another of A's inputs. You're highlighting that this package may be
>> > rejected because of the unrelated transaction(s) B.
>>
>> > The way I see this, an attacker can abuse this rule to ensure
>> > transaction A stays pinned in the mempool without confirming by
>> > broadcasting a set of child transactions that reach these limits
>> > and pay low fees (where A would be a commit tx in LN).
>>
>> I believe you are describing a pinning attack in which your adversarial
>> counterparty attempts to monopolize the mempool descendant limit of the
>> shared  transaction A in order to prevent you from submitting a fee-bumping
>> child C; I've tried to illustrate this as diagram A here:
>> https://user-images.githubusercontent.com/25183001/134159860-068080d0-bbb6-4356-ae74-00df00644c74.png
>> (please let me know if I'm misunderstanding).
>>
>> I believe this attack is mitigated as long as we attempt to submit
>> transactions individually (and thus take advantage of CPFP carve out)
>> before attempting package validation. So, in scenario A2, even if the
>> mempool receives a package with A+C, it would deduplicate A, submit C as an
>> individual transaction, and allow it due to the CPFP carve out exemption. A
>> more general goal is: if a transaction would propagate successfully on its
>> own now, it should still propagate regardless of whether it is included in
>> a package. The best way to ensure this, as far as I can tell, is to always
>> try to submit them individually first.
>>
>> I would note that this proposal doesn't accommodate something like
>> diagram B, where C is getting CPFP carve out and wants to bring a +1 (e.g.
>> C has very low fees and is bumped by D). I don't think this is a use case
>> since C should be the one fee-bumping A, but since we're talking about
>> limitations around the CPFP carve out, this is it.
>>
>> Let me know if this addresses your concerns?
>>
>> Thanks,
>> Gloria
>>
>> On Mon, Sep 20, 2021 at 10:19 AM Bastien TEINTURIER <bastien@acinq•fr>
>> wrote:
>>
>>> Hi Gloria,
>>>
>>> Thanks for this detailed post!
>>>
>>> The illustrations you provided are very useful for this kind of graph
>>> topology problem.
>>>
>>> The rules you lay out for package RBF look good to me at first glance
>>> as there are some subtle improvements compared to BIP 125.
>>>
>>> > 1. A package cannot exceed `MAX_PACKAGE_COUNT=25` count and
>>> > `MAX_PACKAGE_SIZE=101KvB` total size [8]
>>>
>>> I have a question regarding this rule, as your example 2C could be
>>> concerning for LN (unless I didn't understand it correctly).
>>>
>>> This also touches on the package RBF rule 5 ("The package cannot
>>> replace more than 100 mempool transactions.")
>>>
>>> In your example we have a parent transaction A already in the mempool
>>> and an unrelated child B. We submit a package C + D where C spends
>>> another of A's inputs. You're highlighting that this package may be
>>> rejected because of the unrelated transaction(s) B.
>>>
>>> The way I see this, an attacker can abuse this rule to ensure
>>> transaction A stays pinned in the mempool without confirming by
>>> broadcasting a set of child transactions that reach these limits
>>> and pay low fees (where A would be a commit tx in LN).
>>>
>>> We had to create the CPFP carve-out rule explicitly to work around
>>> this limitation, and I think it would be necessary for package RBF
>>> as well, because in such cases we do want to be able to submit a
>>> package A + C where C pays high fees to speed up A's confirmation,
>>> regardless of unrelated unconfirmed children of A...
>>>
>>> We could submit only C to benefit from the existing CPFP carve-out
>>> rule, but that wouldn't work if our local mempool doesn't have A yet
>>> while other remote mempools do.
>>>
>>> Is my concern justified? Is this something that we should dig into a
>>> bit deeper?
>>>
>>> Thanks,
>>> Bastien
>>>
>>> On Thu, Sep 16, 2021 at 9:55 AM Gloria Zhao via bitcoin-dev <
>>> bitcoin-dev@lists•linuxfoundation.org> wrote:
>>>
>>>> [Gloria's original proposal, quoted in full, snipped; see the first message in this thread.]
>>>

[-- Attachment #2: Type: text/html, Size: 32105 bytes --]

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [bitcoin-dev] Proposal: Package Mempool Accept and Package RBF
  2021-09-21 16:42       ` Gloria Zhao
@ 2021-09-22  7:10         ` Bastien TEINTURIER
  2021-09-22 13:26           ` Gloria Zhao
  0 siblings, 1 reply; 16+ messages in thread
From: Bastien TEINTURIER @ 2021-09-22  7:10 UTC (permalink / raw)
  To: Gloria Zhao; +Cc: Bitcoin Protocol Discussion

[-- Attachment #1: Type: text/plain, Size: 30906 bytes --]

Great, thanks for this clarification!

Can you confirm that this won't be an issue either with your
example 2C (in your first set of diagrams)? If I understand it
correctly it shouldn't, but I'd rather be 100% sure.

A package A + C will be able to replace A' + B regardless of
the weight of A' + B?

Thanks,
Bastien

On Tue, Sep 21, 2021 at 6:42 PM Gloria Zhao <gloriajzhao@gmail•com> wrote:

> Hi Bastien,
>
> Excellent diagram :D
>
> > Here the issue is that a revoked commitment tx A' is pinned in other
> > mempools, with a long chain of descendants (or descendants that reach
> > the maximum replaceable size).
> > We would really like A + C to be able to replace this pinned A'.
> > We can't submit individually because A on its own won't replace A'...
>
> Right, this is a key motivation for having Package RBF. In this case, A+C
> can replace A' + B1...B24.
>
> Due to the descendant limit (each node operator can increase it on their
> own node, but the default is 25), A' should have no more than 25
> descendants, even including CPFP carve out. As long as A only conflicts
> with A', it won't be trying to replace more than 100 transactions. The
> proposed package RBF will allow C to pay for A's conflicts, since their
> package feerate is used in the fee comparisons. A is not a descendant of
> A', so the existence of B1...B24 does not prevent the replacement.
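>
> As a rough sanity check with invented fee and size numbers (the rule
> numbers refer to the package RBF rules in my original post; the 1 sat/vB
> incremental relay feerate and every fee and size below are made up):
>
>     replaced = [("A'", 1000, 200)] + [(f"B{i}", 110, 110) for i in range(1, 25)]
>     package  = [("A", 2000, 200), ("C", 10000, 150)]   # (name, fee in sats, vsize in vB)
>     old_fees = sum(fee for _, fee, _ in replaced)      # 3640 sats across 25 transactions
>     new_fees = sum(fee for _, fee, _ in package)       # 12000 sats
>     new_size = sum(size for _, _, size in package)     # 350 vB
>     print(len(replaced) <= 100)                        # Rule #5: at most 100 evictions
>     print(new_fees > old_fees)                         # Rule #3: absolute fees increase
>     print(new_fees - old_fees >= 1 * new_size)         # Rule #4: pays for its own bandwidth
>     print(2000 > old_fees)                             # False: A alone could not have replaced them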
>
> Best,
> Gloria
>
> On Tue, Sep 21, 2021 at 4:18 PM Bastien TEINTURIER <bastien@acinq•fr>
> wrote:
>
>> Hi Gloria,
>>
>> > I believe this attack is mitigated as long as we attempt to submit
>> transactions individually
>>
>> Unfortunately not, as there exists a pinning scenario in LN where a
>> different commit tx is pinned, but you actually can't know which one.
>>
>> Since I really like your diagrams, I made one as well to illustrate:
>>
>> https://user-images.githubusercontent.com/31281497/134198114-5e9c6857-e8fc-405a-be57-18181d5e54cb.jpg
>>
>> Here the issue is that a revoked commitment tx A' is pinned in other
>> mempools, with a long chain of descendants (or descendants that reach
>> the maximum replaceable size).
>>
>> We would really like A + C to be able to replace this pinned A'.
>> We can't submit individually because A on its own won't replace A'...
>>
>> > I would note that this proposal doesn't accommodate something like
>> diagram B, where C is getting CPFP carve out and wants to bring a +1
>>
>> No worries, that case shouldn't be a concern.
>> I believe any L2 protocol can always ensure it confirms such tx trees
>> "one depth after the other" without impacting funds safety, so it
>> only needs to ensure A + C can get into mempools.
>>
>> Thanks,
>> Bastien
>>
>> On Tue, Sep 21, 2021 at 1:18 PM Gloria Zhao <gloriajzhao@gmail•com>
>> wrote:
>>
>>> Hi Bastien,
>>>
>>> Thank you for your feedback!
>>>
>>> > In your example we have a parent transaction A already in the mempool
>>> > and an unrelated child B. We submit a package C + D where C spends
>>> > another of A's inputs. You're highlighting that this package may be
>>> > rejected because of the unrelated transaction(s) B.
>>>
>>> > The way I see this, an attacker can abuse this rule to ensure
>>> > transaction A stays pinned in the mempool without confirming by
>>> > broadcasting a set of child transactions that reach these limits
>>> > and pay low fees (where A would be a commit tx in LN).
>>>
>>> I believe you are describing a pinning attack in which your adversarial
>>> counterparty attempts to monopolize the mempool descendant limit of the
>>> shared transaction A in order to prevent you from submitting a fee-bumping
>>> child C; I've tried to illustrate this as diagram A here:
>>> https://user-images.githubusercontent.com/25183001/134159860-068080d0-bbb6-4356-ae74-00df00644c74.png
>>> (please let me know if I'm misunderstanding).
>>>
>>> I believe this attack is mitigated as long as we attempt to submit
>>> transactions individually (and thus take advantage of CPFP carve out)
>>> before attempting package validation. So, in scenario A2, even if the
>>> mempool receives a package with A+C, it would deduplicate A, submit C as an
>>> individual transaction, and allow it due to the CPFP carve out exemption. A
>>> more general goal is: if a transaction would propagate successfully on its
>>> own now, it should still propagate regardless of whether it is included in
>>> a package. The best way to ensure this, as far as I can tell, is to always
>>> try to submit them individually first.
>>>
>>> I would note that this proposal doesn't accommodate something like
>>> diagram B, where C is getting CPFP carve out and wants to bring a +1 (e.g.
>>> C has very low fees and is bumped by D). I don't think this is a use case
>>> since C should be the one fee-bumping A, but since we're talking about
>>> limitations around the CPFP carve out, this is it.
>>>
>>> Let me know if this addresses your concerns?
>>>
>>> Thanks,
>>> Gloria
>>>
>>> On Mon, Sep 20, 2021 at 10:19 AM Bastien TEINTURIER <bastien@acinq•fr>
>>> wrote:
>>>
>>>> Hi Gloria,
>>>>
>>>> Thanks for this detailed post!
>>>>
>>>> The illustrations you provided are very useful for this kind of graph
>>>> topology problem.
>>>>
>>>> The rules you lay out for package RBF look good to me at first glance
>>>> as there are some subtle improvements compared to BIP 125.
>>>>
>>>> > 1. A package cannot exceed `MAX_PACKAGE_COUNT=25` count and
>>>> > `MAX_PACKAGE_SIZE=101KvB` total size [8]
>>>>
>>>> I have a question regarding this rule, as your example 2C could be
>>>> concerning for LN (unless I didn't understand it correctly).
>>>>
>>>> This also touches on the package RBF rule 5 ("The package cannot
>>>> replace more than 100 mempool transactions.")
>>>>
>>>> In your example we have a parent transaction A already in the mempool
>>>> and an unrelated child B. We submit a package C + D where C spends
>>>> another of A's inputs. You're highlighting that this package may be
>>>> rejected because of the unrelated transaction(s) B.
>>>>
>>>> The way I see this, an attacker can abuse this rule to ensure
>>>> transaction A stays pinned in the mempool without confirming by
>>>> broadcasting a set of child transactions that reach these limits
>>>> and pay low fees (where A would be a commit tx in LN).
>>>>
>>>> We had to create the CPFP carve-out rule explicitly to work around
>>>> this limitation, and I think it would be necessary for package RBF
>>>> as well, because in such cases we do want to be able to submit a
>>>> package A + C where C pays high fees to speed up A's confirmation,
>>>> regardless of unrelated unconfirmed children of A...
>>>>
>>>> We could submit only C to benefit from the existing CPFP carve-out
>>>> rule, but that wouldn't work if our local mempool doesn't have A yet
>>>> while other remote mempools do.
>>>>
>>>> Is my concern justified? Is this something that we should dig into a
>>>> bit deeper?
>>>>
>>>> Thanks,
>>>> Bastien
>>>>
>>>> On Thu, Sep 16, 2021 at 9:55 AM Gloria Zhao via bitcoin-dev <
>>>> bitcoin-dev@lists•linuxfoundation.org> wrote:
>>>>
>>>>> [Gloria's original proposal, quoted in full, snipped; see the first message in this thread.]
>>>>

[-- Attachment #2: Type: text/html, Size: 32921 bytes --]

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [bitcoin-dev] Proposal: Package Mempool Accept and Package RBF
  2021-09-22  7:10         ` Bastien TEINTURIER
@ 2021-09-22 13:26           ` Gloria Zhao
  0 siblings, 0 replies; 16+ messages in thread
From: Gloria Zhao @ 2021-09-22 13:26 UTC (permalink / raw)
  To: Bastien TEINTURIER; +Cc: Bitcoin Protocol Discussion

[-- Attachment #1: Type: text/plain, Size: 32046 bytes --]

Hi Bastien,

> A package A + C will be able to replace A' + B regardless of
> the weight of A' + B?

Correct, the weight of A' + B will not prevent A+C from replacing it (as
long as A+C pays enough fees). In example 2C, we would be able to replace A
with a package.
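
To illustrate with invented numbers why the weight of A' + B doesn't matter
(only its fees do; the 1 sat/vB incremental relay feerate and all figures
here are assumptions for the sake of the example):

    old_fees, old_size = 5000, 100000    # A' + B: very large (100 kvB) but low fee
    new_fees, new_size = 12000, 400      # package A + C
    print(new_fees > old_fees)                        # absolute fee check passes
    print(new_fees - old_fees >= 1 * new_size)        # bandwidth check uses the new package's size
    print(new_fees / new_size > old_fees / old_size)  # package feerate dwarfs the replaced feerate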

Best,
Gloria

On Wed, Sep 22, 2021 at 8:10 AM Bastien TEINTURIER <bastien@acinq•fr> wrote:

> Great, thanks for this clarification!
>
> Can you confirm that this won't be an issue either with your
> example 2C (in your first set of diagrams)? If I understand it
> correctly it shouldn't, but I'd rather be 100% sure.
>
> A package A + C will be able to replace A' + B regardless of
> the weight of A' + B?
>
> Thanks,
> Bastien
>
> On Tue, Sep 21, 2021 at 6:42 PM Gloria Zhao <gloriajzhao@gmail•com>
> wrote:
>
>> Hi Bastien,
>>
>> Excellent diagram :D
>>
>> > Here the issue is that a revoked commitment tx A' is pinned in other
>> > mempools, with a long chain of descendants (or descendants that reach
>> > the maximum replaceable size).
>> > We would really like A + C to be able to replace this pinned A'.
>> > We can't submit individually because A on its own won't replace A'...
>>
>> Right, this is a key motivation for having Package RBF. In this case, A+C
>> can replace A' + B1...B24.
>>
>> Due to the descendant limit (each node operator can increase it on their
>> own node, but the default is 25), A' should have no more than 25
>> descendants, even including CPFP carve out. As long as A only conflicts
>> with A', it won't be trying to replace more than 100 transactions. The
>> proposed package RBF will allow C to pay for A's conflicts, since their
>> package feerate is used in the fee comparisons. A is not a descendant of
>> A', so the existence of B1...B24 does not prevent the replacement.
>>
>> Best,
>> Gloria
>>
>> On Tue, Sep 21, 2021 at 4:18 PM Bastien TEINTURIER <bastien@acinq•fr>
>> wrote:
>>
>>> Hi Gloria,
>>>
>>> > I believe this attack is mitigated as long as we attempt to submit
>>> transactions individually
>>>
>>> Unfortunately not, as there exists a pinning scenario in LN where a
>>> different commit tx is pinned, but you actually can't know which one.
>>>
>>> Since I really like your diagrams, I made one as well to illustrate:
>>>
>>> https://user-images.githubusercontent.com/31281497/134198114-5e9c6857-e8fc-405a-be57-18181d5e54cb.jpg
>>>
>>> Here the issue is that a revoked commitment tx A' is pinned in other
>>> mempools, with a long chain of descendants (or descendants that reach
>>> the maximum replaceable size).
>>>
>>> We would really like A + C to be able to replace this pinned A'.
>>> We can't submit individually because A on its own won't replace A'...
>>>
>>> > I would note that this proposal doesn't accommodate something like
>>> diagram B, where C is getting CPFP carve out and wants to bring a +1
>>>
>>> No worries, that case shouldn't be a concern.
>>> I believe any L2 protocol can always ensure it confirms such tx trees
>>> "one depth after the other" without impacting funds safety, so it
>>> only needs to ensure A + C can get into mempools.
>>>
>>> Thanks,
>>> Bastien
>>>
>>> Le mar. 21 sept. 2021 à 13:18, Gloria Zhao <gloriajzhao@gmail•com> a
>>> écrit :
>>>
>>>> Hi Bastien,
>>>>
>>>> Thank you for your feedback!
>>>>
>>>> > In your example we have a parent transaction A already in the mempool
>>>> > and an unrelated child B. We submit a package C + D where C spends
>>>> > another of A's inputs. You're highlighting that this package may be
>>>> > rejected because of the unrelated transaction(s) B.
>>>>
>>>> > The way I see this, an attacker can abuse this rule to ensure
>>>> > transaction A stays pinned in the mempool without confirming by
>>>> > broadcasting a set of child transactions that reach these limits
>>>> > and pay low fees (where A would be a commit tx in LN).
>>>>
>>>> I believe you are describing a pinning attack in which your adversarial
>>>> counterparty attempts to monopolize the mempool descendant limit of the
>>>> shared transaction A in order to prevent you from submitting a fee-bumping
>>>> child C; I've tried to illustrate this as diagram A here:
>>>> https://user-images.githubusercontent.com/25183001/134159860-068080d0-bbb6-4356-ae74-00df00644c74.png
>>>> (please let me know if I'm misunderstanding).
>>>>
>>>> I believe this attack is mitigated as long as we attempt to submit
>>>> transactions individually (and thus take advantage of CPFP carve out)
>>>> before attempting package validation. So, in scenario A2, even if the
>>>> mempool receives a package with A+C, it would deduplicate A, submit C as an
>>>> individual transaction, and allow it due to the CPFP carve out exemption. A
>>>> more general goal is: if a transaction would propagate successfully on its
>>>> own now, it should still propagate regardless of whether it is included in
>>>> a package. The best way to ensure this, as far as I can tell, is to always
>>>> try to submit them individually first.
>>>>
>>>> I would note that this proposal doesn't accommodate something like
>>>> diagram B, where C is getting CPFP carve out and wants to bring a +1 (e.g.
>>>> C has very low fees and is bumped by D). I don't think this is a use case
>>>> since C should be the one fee-bumping A, but since we're talking about
>>>> limitations around the CPFP carve out, this is it.
>>>>
>>>> Let me know if this addresses your concerns?
>>>>
>>>> Thanks,
>>>> Gloria
>>>>
>>>> On Mon, Sep 20, 2021 at 10:19 AM Bastien TEINTURIER <bastien@acinq•fr>
>>>> wrote:
>>>>
>>>>> Hi Gloria,
>>>>>
>>>>> Thanks for this detailed post!
>>>>>
>>>>> The illustrations you provided are very useful for this kind of graph
>>>>> topology problems.
>>>>>
>>>>> The rules you lay out for package RBF look good to me at first glance
>>>>> as there are some subtle improvements compared to BIP 125.
>>>>>
>>>>> > 1. A package cannot exceed `MAX_PACKAGE_COUNT=25` count and
>>>>> > `MAX_PACKAGE_SIZE=101KvB` total size [8]
>>>>>
>>>>> I have a question regarding this rule, as your example 2C could be
>>>>> concerning for LN (unless I didn't understand it correctly).
>>>>>
>>>>> This also touches on the package RBF rule 5 ("The package cannot
>>>>> replace more than 100 mempool transactions.")
>>>>>
>>>>> In your example we have a parent transaction A already in the mempool
>>>>> and an unrelated child B. We submit a package C + D where C spends
>>>>> another of A's inputs. You're highlighting that this package may be
>>>>> rejected because of the unrelated transaction(s) B.
>>>>>
>>>>> The way I see this, an attacker can abuse this rule to ensure
>>>>> transaction A stays pinned in the mempool without confirming by
>>>>> broadcasting a set of child transactions that reach these limits
>>>>> and pay low fees (where A would be a commit tx in LN).
>>>>>
>>>>> We had to create the CPFP carve-out rule explicitly to work around
>>>>> this limitation, and I think it would be necessary for package RBF
>>>>> as well, because in such cases we do want to be able to submit a
>>>>> package A + C where C pays high fees to speed up A's confirmation,
>>>>> regardless of unrelated unconfirmed children of A...
>>>>>
>>>>> We could submit only C to benefit from the existing CPFP carve-out
>>>>> rule, but that wouldn't work if our local mempool doesn't have A yet
>>>>> while other remote mempools do.
>>>>>
>>>>> Is my concern justified? Is this something that we should dig into a
>>>>> bit deeper?
>>>>>
>>>>> Thanks,
>>>>> Bastien
>>>>>
>>>>> Le jeu. 16 sept. 2021 à 09:55, Gloria Zhao via bitcoin-dev <
>>>>> bitcoin-dev@lists•linuxfoundation.org> a écrit :
>>>>>
>>>>>> Hi there,
>>>>>>
>>>>>> I'm writing to propose a set of mempool policy changes to enable
>>>>>> package
>>>>>> validation (in preparation for package relay) in Bitcoin Core. These
>>>>>> would not
>>>>>> be consensus or P2P protocol changes. However, since mempool policy
>>>>>> significantly affects transaction propagation, I believe this is
>>>>>> relevant for
>>>>>> the mailing list.
>>>>>>
>>>>>> My proposal enables packages consisting of multiple parents and 1
>>>>>> child. If you
>>>>>> develop software that relies on specific transaction relay
>>>>>> assumptions and/or
>>>>>> are interested in using package relay in the future, I'm very
>>>>>> interested to hear
>>>>>> your feedback on the utility or restrictiveness of these package
>>>>>> policies for
>>>>>> your use cases.
>>>>>>
>>>>>> A draft implementation of this proposal can be found in [Bitcoin Core
>>>>>> PR#22290][1].
>>>>>>
>>>>>> An illustrated version of this post can be found at
>>>>>> https://gist.github.com/glozow/dc4e9d5c5b14ade7cdfac40f43adb18a.
>>>>>> I have also linked the images below.
>>>>>>
>>>>>> ## Background
>>>>>>
>>>>>> Feel free to skip this section if you are already familiar with
>>>>>> mempool policy
>>>>>> and package relay terminology.
>>>>>>
>>>>>> ### Terminology Clarifications
>>>>>>
>>>>>> * Package = an ordered list of related transactions, representable by
>>>>>> a Directed
>>>>>>   Acyclic Graph.
>>>>>> * Package Feerate = the total modified fees divided by the total
>>>>>> virtual size of
>>>>>>   all transactions in the package.
>>>>>>     - Modified fees = a transaction's base fees + fee delta applied
>>>>>> by the user
>>>>>>       with `prioritisetransaction`. As such, we expect this to vary
>>>>>> across
>>>>>> mempools.
>>>>>>     - Virtual Size = the maximum of virtual sizes calculated using
>>>>>> [BIP141
>>>>>>       virtual size][2] and sigop weight. [Implemented here in Bitcoin
>>>>>> Core][3].
>>>>>>     - Note that feerate is not necessarily based on the base fees and
>>>>>> serialized
>>>>>>       size.
>>>>>>
>>>>>> * Fee-Bumping = user/wallet actions that take advantage of miner
>>>>>> incentives to
>>>>>>   boost a transaction's candidacy for inclusion in a block, including
>>>>>> Child Pays
>>>>>> for Parent (CPFP) and [BIP125][12] Replace-by-Fee (RBF). Our
>>>>>> intention in
>>>>>> mempool policy is to recognize when the new transaction is more
>>>>>> economical to
>>>>>> mine than the original one(s) but not open DoS vectors, so there are
>>>>>> some
>>>>>> limitations.
>>>>>>
>>>>>> ### Policy
>>>>>>
>>>>>> The purpose of the mempool is to store the best (to be most
>>>>>> incentive-compatible
>>>>>> with miners, highest feerate) candidates for inclusion in a block.
>>>>>> Miners use
>>>>>> the mempool to build block templates. The mempool is also useful as a
>>>>>> cache for
>>>>>> boosting block relay and validation performance, aiding transaction
>>>>>> relay, and
>>>>>> generating feerate estimations.
>>>>>>
>>>>>> Ideally, all consensus-valid transactions paying reasonable fees
>>>>>> should make it
>>>>>> to miners through normal transaction relay, without any special
>>>>>> connectivity or
>>>>>> relationships with miners. On the other hand, nodes do not have
>>>>>> unlimited
>>>>>> resources, and a P2P network designed to let any honest node
>>>>>> broadcast their
>>>>>> transactions also exposes the transaction validation engine to DoS
>>>>>> attacks from
>>>>>> malicious peers.
>>>>>>
>>>>>> As such, for unconfirmed transactions we are considering for our
>>>>>> mempool, we
>>>>>> apply a set of validation rules in addition to consensus, primarily
>>>>>> to protect
>>>>>> us from resource exhaustion and aid our efforts to keep the highest
>>>>>> fee
>>>>>> transactions. We call this mempool _policy_: a set of (configurable,
>>>>>> node-specific) rules that transactions must abide by in order to be
>>>>>> accepted
>>>>>> into our mempool. Transaction "Standardness" rules and mempool
>>>>>> restrictions such
>>>>>> as "too-long-mempool-chain" are both examples of policy.
>>>>>>
>>>>>> ### Package Relay and Package Mempool Accept
>>>>>>
>>>>>> In transaction relay, we currently consider transactions one at a
>>>>>> time for
>>>>>> submission to the mempool. This creates a limitation in the node's
>>>>>> ability to
>>>>>> determine which transactions have the highest feerates, since we
>>>>>> cannot take
>>>>>> into account descendants (i.e. cannot use CPFP) until all the
>>>>>> transactions are
>>>>>> in the mempool. Similarly, we cannot use a transaction's descendants
>>>>>> when
>>>>>> considering it for RBF. When an individual transaction does not meet
>>>>>> the mempool
>>>>>> minimum feerate and the user isn't able to create a replacement
>>>>>> transaction
>>>>>> directly, it will not be accepted by mempools.
>>>>>>
>>>>>> This limitation presents a security issue for applications and users
>>>>>> relying on
>>>>>> time-sensitive transactions. For example, Lightning and other
>>>>>> protocols create
>>>>>> UTXOs with multiple spending paths, where one counterparty's spending
>>>>>> path opens
>>>>>> up after a timelock, and users are protected from cheating scenarios
>>>>>> as long as
>>>>>> they redeem on-chain in time. A key security assumption is that all
>>>>>> parties'
>>>>>> transactions will propagate and confirm in a timely manner. This
>>>>>> assumption can
>>>>>> be broken if fee-bumping does not work as intended.
>>>>>>
>>>>>> The end goal for Package Relay is to consider multiple transactions
>>>>>> at the same
>>>>>> time, e.g. a transaction with its high-fee child. This may help us
>>>>>> better
>>>>>> determine whether transactions should be accepted to our mempool,
>>>>>> especially if
>>>>>> they don't meet fee requirements individually or are better RBF
>>>>>> candidates as a
>>>>>> package. A combination of changes to mempool validation logic,
>>>>>> policy, and
>>>>>> transaction relay allows us to better propagate the transactions with
>>>>>> the
>>>>>> highest package feerates to miners, and makes fee-bumping tools more
>>>>>> powerful
>>>>>> for users.
>>>>>>
>>>>>> The "relay" part of Package Relay suggests P2P messaging changes, but
>>>>>> a large
>>>>>> part of the changes are in the mempool's package validation logic. We
>>>>>> call this
>>>>>> *Package Mempool Accept*.
>>>>>>
>>>>>> ### Previous Work
>>>>>>
>>>>>> * Given that mempool validation is DoS-sensitive and complex, it
>>>>>> would be
>>>>>>   dangerous to haphazardly tack on package validation logic. Many
>>>>>> efforts have
>>>>>> been made to make mempool validation less opaque (see [#16400][4],
>>>>>> [#21062][5],
>>>>>> [#22675][6], [#22796][7]).
>>>>>> * [#20833][8] Added basic capabilities for package validation, test
>>>>>> accepts only
>>>>>>   (no submission to mempool).
>>>>>> * [#21800][9] Implemented package ancestor/descendant limit checks
>>>>>> for arbitrary
>>>>>>   packages. Still test accepts only.
>>>>>> * Previous package relay proposals (see [#16401][10], [#19621][11]).
>>>>>>
>>>>>> ### Existing Package Rules
>>>>>>
>>>>>> These are in master as introduced in [#20833][8] and [#21800][9].
>>>>>> I'll consider
>>>>>> them as "given" in the rest of this document, though they can be
>>>>>> changed, since
>>>>>> package validation is test-accept only right now.
>>>>>>
>>>>>> 1. A package cannot exceed `MAX_PACKAGE_COUNT=25` count and
>>>>>> `MAX_PACKAGE_SIZE=101KvB` total size [8]
>>>>>>
>>>>>>    *Rationale*: This is already enforced as mempool
>>>>>> ancestor/descendant limits.
>>>>>> Presumably, transactions in a package are all related, so exceeding
>>>>>> this limit
>>>>>> would mean that the package can either be split up or it wouldn't
>>>>>> pass this
>>>>>> mempool policy.
>>>>>>
>>>>>> 2. Packages must be topologically sorted: if any dependencies exist
>>>>>> between
>>>>>> transactions, parents must appear somewhere before children. [8]
>>>>>>
>>>>>> 3. A package cannot have conflicting transactions, i.e. none of them
>>>>>> can spend
>>>>>> the same inputs. This also means there cannot be duplicate
>>>>>> transactions. [8]
>>>>>>
>>>>>> 4. When packages are evaluated against ancestor/descendant limits in
>>>>>> a test
>>>>>> accept, the union of all of their descendants and ancestors is
>>>>>> considered. This
>>>>>> is essentially a "worst case" heuristic where every transaction in
>>>>>> the package
>>>>>> is treated as each other's ancestor and descendant. [8]
>>>>>> Packages for which ancestor/descendant limits are accurately captured
>>>>>> by this
>>>>>> heuristic: [19]
>>>>>>
>>>>>> There are also limitations such as the fact that CPFP carve out is
>>>>>> not applied
>>>>>> to package transactions. #20833 also disables RBF in package
>>>>>> validation; this
>>>>>> proposal overrides that to allow packages to use RBF.
>>>>>>
>>>>>> ## Proposed Changes
>>>>>>
>>>>>> The next step in the Package Mempool Accept project is to implement
>>>>>> submission
>>>>>> to mempool, initially through RPC only. This allows us to test the
>>>>>> submission
>>>>>> logic before exposing it on P2P.
>>>>>>
>>>>>> ### Summary
>>>>>>
>>>>>> - Packages may contain already-in-mempool transactions.
>>>>>> - Packages are 2 generations, Multi-Parent-1-Child.
>>>>>> - Fee-related checks use the package feerate. This means that wallets
>>>>>> can
>>>>>> create a package that utilizes CPFP.
>>>>>> - Parents are allowed to RBF mempool transactions with a set of rules
>>>>>> similar
>>>>>>   to BIP125. This enables a combination of CPFP and RBF, where a
>>>>>> transaction's descendant fees pay for replacing mempool conflicts.
>>>>>>
>>>>>> There is a draft implementation in [#22290][1]. It is WIP, but
>>>>>> feedback is
>>>>>> always welcome.
>>>>>>
>>>>>> ### Details
>>>>>>
>>>>>> #### Packages May Contain Already-in-Mempool Transactions
>>>>>>
>>>>>> A package may contain transactions that are already in the mempool.
>>>>>> We remove
>>>>>> ("deduplicate") those transactions from the package for the purposes
>>>>>> of package
>>>>>> mempool acceptance. If a package is empty after deduplication, we do
>>>>>> nothing.
>>>>>>
>>>>>> *Rationale*: Mempools vary across the network. It's possible for a
>>>>>> parent to be
>>>>>> accepted to the mempool of a peer on its own due to differences in
>>>>>> policy and
>>>>>> fee market fluctuations. We should not reject or penalize the entire
>>>>>> package for
>>>>>> an individual transaction as that could be a censorship vector.
>>>>>>
>>>>>> #### Packages Are Multi-Parent-1-Child
>>>>>>
>>>>>> Only packages of a specific topology are permitted. Namely, a package
>>>>>> is exactly
>>>>>> 1 child with all of its unconfirmed parents. After deduplication, the
>>>>>> package
>>>>>> may be exactly the same, empty, 1 child, 1 child with just some of its
>>>>>> unconfirmed parents, etc. Note that it's possible for the parents to
>>>>>> be indirect
>>>>>> descendants/ancestors of one another, or for parent and child to
>>>>>> share a parent,
>>>>>> so we cannot make any other topology assumptions.
>>>>>>
>>>>>> *Rationale*: This allows for fee-bumping by CPFP. Allowing multiple
>>>>>> parents
>>>>>> makes it possible to fee-bump a batch of transactions. Restricting
>>>>>> packages to a
>>>>>> defined topology is also easier to reason about and simplifies the
>>>>>> validation
>>>>>> logic greatly. Multi-parent-1-child allows us to think of the package
>>>>>> as one big
>>>>>> transaction, where:
>>>>>>
>>>>>> - Inputs = all the inputs of parents + inputs of the child that come
>>>>>> from
>>>>>>   confirmed UTXOs
>>>>>> - Outputs = all the outputs of the child + all outputs of the parents
>>>>>> that
>>>>>>   aren't spent by other transactions in the package
>>>>>>
>>>>>> Examples of packages that follow this rule (variations of example A
>>>>>> show some
>>>>>> possibilities after deduplication): ![image][15]
>>>>>>
>>>>>> #### Fee-Related Checks Use Package Feerate
>>>>>>
>>>>>> Package Feerate = the total modified fees divided by the total
>>>>>> virtual size of
>>>>>> all transactions in the package.
>>>>>>
>>>>>> To meet the two feerate requirements of a mempool, i.e., the
>>>>>> pre-configured
>>>>>> minimum relay feerate (`minRelayTxFee`) and dynamic mempool minimum
>>>>>> feerate, the
>>>>>> total package feerate is used instead of the individual feerate. The
>>>>>> individual
>>>>>> transactions are allowed to be below feerate requirements if the
>>>>>> package meets
>>>>>> the feerate requirements. For example, the parent(s) in the package
>>>>>> can have 0
>>>>>> fees but be paid for by the child.
>>>>>>
>>>>>> *Rationale*: This can be thought of as "CPFP within a package,"
>>>>>> solving the
>>>>>> issue of a parent not meeting minimum fees on its own. This allows L2
>>>>>> applications to adjust their fees at broadcast time instead of
>>>>>> overshooting or
>>>>>> risking getting stuck/pinned.
>>>>>>
>>>>>> We use the package feerate of the package *after deduplication*.
>>>>>>
>>>>>> *Rationale*:  It would be incorrect to use the fees of transactions
>>>>>> that are
>>>>>> already in the mempool, as we do not want a transaction's fees to be
>>>>>> double-counted for both its individual RBF and package RBF.
>>>>>>
>>>>>> Examples F and G [14] show the same package, but P1 is submitted
>>>>>> individually before
>>>>>> the package in example G. In example F, we can see that the 300vB
>>>>>> package pays
>>>>>> an additional 200sat in fees, which is not enough to pay for its own
>>>>>> bandwidth
>>>>>> (BIP125#4). In example G, we can see that P1 pays enough to replace
>>>>>> M1, but
>>>>>> using P1's fees again during package submission would make it look
>>>>>> like a 300sat
>>>>>> increase for a 200vB package. Even including its fees and size would
>>>>>> not be
>>>>>> sufficient in this example, since the 300sat looks like enough for
>>>>>> the 300vB
>>>>>> package. The calculation after deduplication is 100sat increase for
>>>>>> a package
>>>>>> of size 200vB, which correctly fails BIP125#4. Assume all
>>>>>> transactions have a
>>>>>> size of 100vB.
>>>>>>
>>>>>> #### Package RBF
>>>>>>
>>>>>> If a package meets feerate requirements as a package, the parents in
>>>>>> the
>>>>>> transaction are allowed to replace-by-fee mempool transactions. The
>>>>>> child cannot
>>>>>> replace mempool transactions. Multiple transactions can replace the
>>>>>> same
>>>>>> transaction, but in order to be valid, none of the transactions can
>>>>>> try to
>>>>>> replace an ancestor of another transaction in the same package (which
>>>>>> would thus
>>>>>> make its inputs unavailable).
>>>>>>
>>>>>> *Rationale*: Even if we are using package feerate, a package will not
>>>>>> propagate
>>>>>> as intended if RBF still requires each individual transaction to meet
>>>>>> the
>>>>>> feerate requirements.
>>>>>>
>>>>>> We use a set of rules slightly modified from BIP125 as follows:
>>>>>>
>>>>>> ##### Signaling (Rule #1)
>>>>>>
>>>>>> All mempool transactions to be replaced must signal replaceability.
>>>>>>
>>>>>> *Rationale*: Package RBF signaling logic should be the same for
>>>>>> package RBF and
>>>>>> single transaction acceptance. This would be updated if single
>>>>>> transaction
>>>>>> validation moves to full RBF.
>>>>>>
>>>>>> ##### New Unconfirmed Inputs (Rule #2)
>>>>>>
>>>>>> A package may include new unconfirmed inputs, but the ancestor
>>>>>> feerate of the
>>>>>> child must be at least as high as the ancestor feerates of every
>>>>>> transaction
>>>>>> being replaced. This is contrary to BIP125#2, which states "The
>>>>>> replacement
>>>>>> transaction may only include an unconfirmed input if that input was
>>>>>> included in
>>>>>> one of the original transactions. (An unconfirmed input spends an
>>>>>> output from a
>>>>>> currently-unconfirmed transaction.)"
>>>>>>
>>>>>> *Rationale*: The purpose of BIP125#2 is to ensure that the replacement
>>>>>> transaction has a higher ancestor score than the original
>>>>>> transaction(s) (see
>>>>>> [comment][13]). Example H [16] shows how adding a new unconfirmed
>>>>>> input can lower the
>>>>>> ancestor score of the replacement transaction. P1 is trying to
>>>>>> replace M1, and
>>>>>> spends an unconfirmed output of M2. P1 pays 800sat, M1 pays 600sat,
>>>>>> and M2 pays
>>>>>> 100sat. Assume all transactions have a size of 100vB. While, in
>>>>>> isolation, P1
>>>>>> looks like a better mining candidate than M1, it must be mined with
>>>>>> M2, so its
>>>>>> ancestor feerate is actually 4.5sat/vB.  This is lower than M1's
>>>>>> ancestor
>>>>>> feerate, which is 6sat/vB.
>>>>>>
>>>>>> In package RBF, the rule analogous to BIP125#2 would be "none of the
>>>>>> transactions in the package can spend new unconfirmed inputs."
>>>>>> Example J [17] shows
>>>>>> why, if any of the package transactions have ancestors, package
>>>>>> feerate is no
>>>>>> longer accurate. Even though M2 and M3 are not ancestors of P1 (which
>>>>>> is the
>>>>>> replacement transaction in an RBF), we're actually interested in the
>>>>>> entire
>>>>>> package. A miner should mine M1 which is 5sat/vB instead of M2, M3,
>>>>>> P1, P2, and
>>>>>> P3, which is only 4sat/vB. The Package RBF rule cannot be loosened to
>>>>>> only allow
>>>>>> the child to have new unconfirmed inputs, either, because it can
>>>>>> still cause us
>>>>>> to overestimate the package's ancestor score.
>>>>>>
>>>>>> However, enforcing a rule analogous to BIP125#2 would not only make
>>>>>> Package RBF
>>>>>> less useful, but would also break Package RBF for packages with
>>>>>> parents already
>>>>>> in the mempool: if a package parent has already been submitted, it
>>>>>> would look
>>>>>> like the child is spending a "new" unconfirmed input. In example K
>>>>>> [18], we're
>>>>>> looking to replace M1 with the entire package including P1, P2, and
>>>>>> P3. We must
>>>>>> consider the case where one of the parents is already in the mempool
>>>>>> (in this
>>>>>> case, P2), which means we must allow P3 to have new unconfirmed
>>>>>> inputs. However,
>>>>>> M2 lowers the ancestor score of P3 to 4.3sat/vB, so we should not
>>>>>> replace M1
>>>>>> with this package.
>>>>>>
>>>>>> Thus, the package RBF rule regarding new unconfirmed inputs is less
>>>>>> strict than
>>>>>> BIP125#2. However, we still achieve the same goal of requiring the
>>>>>> replacement
>>>>>> transactions to have an ancestor score at least as high as the
>>>>>> original ones. As
>>>>>> a result, the entire package is required to be a higher feerate
>>>>>> mining candidate
>>>>>> than each of the replaced transactions.
>>>>>>
>>>>>> Another note: the [comment][13] above the BIP125#2 code in the
>>>>>> original RBF
>>>>>> implementation suggests that the rule was intended to be temporary.
>>>>>>
>>>>>> ##### Absolute Fee (Rule #3)
>>>>>>
>>>>>> The package must increase the absolute fee of the mempool, i.e. the
>>>>>> total fees
>>>>>> of the package must be higher than the absolute fees of the mempool
>>>>>> transactions
>>>>>> it replaces. Combined with the CPFP rule above, this differs from
>>>>>> BIP125 Rule #3
>>>>>> - an individual transaction in the package may have lower fees than
>>>>>> the
>>>>>>   transaction(s) it is replacing. In fact, it may have 0 fees, and
>>>>>> the child
>>>>>> pays for RBF.
>>>>>>
>>>>>> ##### Feerate (Rule #4)
>>>>>>
>>>>>> The package must pay for its own bandwidth; the package feerate must
>>>>>> be higher
>>>>>> than the replaced transactions by at least minimum relay feerate
>>>>>> (`incrementalRelayFee`). Combined with the CPFP rule above, this
>>>>>> differs from
>>>>>> BIP125 Rule #4 - an individual transaction in the package can have a
>>>>>> lower
>>>>>> feerate than the transaction(s) it is replacing. In fact, it may have
>>>>>> 0 fees,
>>>>>> and the child pays for RBF.
>>>>>>
>>>>>> ##### Total Number of Replaced Transactions (Rule #5)
>>>>>>
>>>>>> The package cannot replace more than 100 mempool transactions. This
>>>>>> is identical
>>>>>> to BIP125 Rule #5.
>>>>>>
>>>>>> ### Expected FAQs
>>>>>>
>>>>>> 1. Is it possible for only some of the package to make it into the
>>>>>> mempool?
>>>>>>
>>>>>>    Yes, it is. However, since we evict transactions from the mempool
>>>>>> by
>>>>>> descendant score and the package child is supposed to be sponsoring
>>>>>> the fees of
>>>>>> its parents, the most common scenario would be all-or-nothing. This is
>>>>>> incentive-compatible. In fact, to be conservative, package validation
>>>>>> should
>>>>>> begin by trying to submit all of the transactions individually, and
>>>>>> only use the
>>>>>> package mempool acceptance logic if the parents fail due to low
>>>>>> feerate.
>>>>>>
>>>>>> 2. Should we allow packages to contain already-confirmed transactions?
>>>>>>
>>>>>>     No, for practical reasons. In mempool validation, we actually
>>>>>> aren't able to
>>>>>> tell with 100% confidence if we are looking at a transaction that has
>>>>>> already
>>>>>> confirmed, because we look up inputs using a UTXO set. If we have
>>>>>> historical
>>>>>> block data, it's possible to look for it, but this is inefficient,
>>>>>> not always
>>>>>> possible for pruning nodes, and unnecessary because we're not going
>>>>>> to do
>>>>>> anything with the transaction anyway. As such, we already have the
>>>>>> expectation
>>>>>> that transaction relay is somewhat "stateful" i.e. nobody should be
>>>>>> relaying
>>>>>> transactions that have already been confirmed. Similarly, we
>>>>>> shouldn't be
>>>>>> relaying packages that contain already-confirmed transactions.
>>>>>>
>>>>>> [1]: https://github.com/bitcoin/bitcoin/pull/22290
>>>>>> [2]:
>>>>>> https://github.com/bitcoin/bips/blob/1f0b563738199ca60d32b4ba779797fc97d040fe/bip-0141.mediawiki#transaction-size-calculations
>>>>>> [3]:
>>>>>> https://github.com/bitcoin/bitcoin/blob/94f83534e4b771944af7d9ed0f40746f392eb75e/src/policy/policy.cpp#L282
>>>>>> [4]: https://github.com/bitcoin/bitcoin/pull/16400
>>>>>> [5]: https://github.com/bitcoin/bitcoin/pull/21062
>>>>>> [6]: https://github.com/bitcoin/bitcoin/pull/22675
>>>>>> [7]: https://github.com/bitcoin/bitcoin/pull/22796
>>>>>> [8]: https://github.com/bitcoin/bitcoin/pull/20833
>>>>>> [9]: https://github.com/bitcoin/bitcoin/pull/21800
>>>>>> [10]: https://github.com/bitcoin/bitcoin/pull/16401
>>>>>> [11]: https://github.com/bitcoin/bitcoin/pull/19621
>>>>>> [12]: https://github.com/bitcoin/bips/blob/master/bip-0125.mediawiki
>>>>>> [13]:
>>>>>> https://github.com/bitcoin/bitcoin/pull/6871/files#diff-34d21af3c614ea3cee120df276c9c4ae95053830d7f1d3deaf009a4625409ad2R1101-R1104
>>>>>> [14]:
>>>>>> https://user-images.githubusercontent.com/25183001/133567078-075a971c-0619-4339-9168-b41fd2b90c28.png
>>>>>> [15]:
>>>>>> https://user-images.githubusercontent.com/25183001/132856734-fc17da75-f875-44bb-b954-cb7a1725cc0d.png
>>>>>> [16]:
>>>>>> https://user-images.githubusercontent.com/25183001/133567347-a3e2e4a8-ae9c-49f8-abb9-81e8e0aba224.png
>>>>>> [17]:
>>>>>> https://user-images.githubusercontent.com/25183001/133567370-21566d0e-36c8-4831-b1a8-706634540af3.png
>>>>>> [18]:
>>>>>> https://user-images.githubusercontent.com/25183001/133567444-bfff1142-439f-4547-800a-2ba2b0242bcb.png
>>>>>> [19]:
>>>>>> https://user-images.githubusercontent.com/25183001/133456219-0bb447cb-dcb4-4a31-b9c1-7d86205b68bc.png
>>>>>> [20]:
>>>>>> https://user-images.githubusercontent.com/25183001/132857787-7b7c6f56-af96-44c8-8d78-983719888c19.png
>>>>>> _______________________________________________
>>>>>> bitcoin-dev mailing list
>>>>>> bitcoin-dev@lists•linuxfoundation.org
>>>>>> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>>>>>>
>>>>>

[-- Attachment #2: Type: text/html, Size: 33705 bytes --]

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [bitcoin-dev] Proposal: Package Mempool Accept and Package RBF
  2021-09-20 15:10   ` Gloria Zhao
@ 2021-09-23  4:29     ` Antoine Riard
  2021-09-23 15:36       ` Gloria Zhao
  0 siblings, 1 reply; 16+ messages in thread
From: Antoine Riard @ 2021-09-23  4:29 UTC (permalink / raw)
  To: Gloria Zhao; +Cc: Bitcoin Protocol Discussion

[-- Attachment #1: Type: text/plain, Size: 49511 bytes --]

> Correct, if B+C is too low feerate to be accepted, we will reject it. I
> prefer this because it is incentive compatible: A can be mined by itself,
> so there's no reason to prefer A+B+C instead of A.
> As another way of looking at this, consider the case where we do accept
> A+B+C and it sits at the "bottom" of our mempool. If our mempool reaches
> capacity, we evict the lowest descendant feerate transactions, which are
> B+C in this case. This gives us the same resulting mempool, with A and not
> B+C.

I agree here. Doing otherwise, we might evict other mempool transactions in
`MempoolAccept::Finalize` with a higher feerate than B+C, while those
evicted transactions are the most compelling for block construction.
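
As a toy illustration of that equivalence (made-up numbers, a simplified
descendant score, and assuming a simple chain A -> B -> C; this is not
Core's eviction code):

    # Toy model of trimming at capacity by descendant feerate.
    # Fees in sats, sizes in vB; all values assumed.
    mempool = {
        "A": {"fee": 5_000, "vsize": 100, "descendants": ("A", "B", "C")},
        "B": {"fee": 100,   "vsize": 100, "descendants": ("B", "C")},
        "C": {"fee": 100,   "vsize": 100, "descendants": ("C",)},
    }

    def descendant_feerate(txid):
        txs = mempool[txid]["descendants"]
        return (sum(mempool[t]["fee"] for t in txs)
                / sum(mempool[t]["vsize"] for t in txs))

    # B and C score the worst, so they are the first to go when the
    # mempool is full, leaving A alone -- the same mempool as if B+C had
    # been rejected up front.
    print(sorted(mempool, key=descendant_feerate))  # ['B', 'C', 'A']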

I thought at first that missing this acceptance requirement would break a
fee-bumping scheme like Parent-Pays-For-Child, where a high-fee parent is
attached to a child signed with SIGHASH_ANYONECANPAY, but in this case the
child's fee is capturing the parent's value. I can't think of other
fee-bumping schemes potentially affected. If they do exist, I would say
they're wrong in their design assumptions.

> If or when we have witness replacement, the logic is: if the individual
> transaction is enough to replace the mempool one, the replacement will
> happen during the preceding individual transaction acceptance, and
> deduplication logic will work. Otherwise, we will try to deduplicate by
> wtxid, see that we need a package witness replacement, and use the package
> feerate to evaluate whether this is economically rational.

IIUC, you have package A+B; during the dedup phase early in
`AcceptMultipleTransactions`, if you observe same-txid-different-wtxid A'
and A' is higher feerate than A, you trim A and replace it with A' ?

I think this approach is safe; the one that appears unsafe to me is when A'
has a _lower_ feerate, even if A' is already accepted by our mempool ? In
that case, IIRC, that would be a pinning.
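
To spell out my reading of the dedup decision (a sketch with a hypothetical
helper, not the code in #22290):

    # Sketch of the dedup decision as I understand it. `mempool` maps
    # txid -> wtxid for what we already have; names are hypothetical.
    def dedup_action(txid, wtxid, mempool):
        if txid not in mempool:
            return "validate as part of the package"
        if mempool[txid] == wtxid:
            return "skip: identical transaction already in mempool"
        # Same txid, different witness: today the in-mempool witness is
        # simply kept; a future witness-replacement policy would compare
        # the two (individually first, then at package feerate).
        return "skip: keep in-mempool witness for now"

    print(dedup_action("a1", "w2", {"a1": "w1"}))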

Good to see progress on witness replacement before we see usage of Taproot
trees in multi-party contexts, where a malicious counterparty inflates
its witness to jam an honest spending.

(Note, the commit linked currently points nowhere :))


> Please note that A may replace A' even if A' has higher fees than A
> individually, because the proposed package RBF utilizes the fees and size
> of the entire package. This just requires E to pay enough fees, although
> this can be pretty high if there are also potential B' and C' competing
> commitment transactions that we don't know about.

Ah right, if the package acceptance waives `PaysMoreThanConflicts` for the
individual check on A, the honest package should replace the pinning
attempt. I've not fully parsed the proposed implementation yet.

Though note, I think it's still unsafe for a Lightning
multi-commitment-broadcast-as-one-package, as a malicious A' might have an
absolute fee higher than E. It sounds uneconomical for
an attacker, but I think it's not when you consider that you can "batch" the
attack against multiple honest counterparties. E.g., Mallory broadcasts A' +
B' + C' + D' where A' conflicts with Alice's honest package P1, B'
conflicts with Bob's honest package P2, C' conflicts with Caroll's honest
package P3. And D' is a high-fee child of A' + B' + C'.

If D' pays a higher fee than P1 or P2 or P3 individually, but less than the
sum of the HTLCs confirmed by P1+P2+P3, I think it's lucrative for the
attacker ?
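
With entirely made-up numbers, the asymmetry I have in mind looks like this
(it ignores whether the replacement itself succeeds and only compares the
attacker's fee cost to the value at risk):

    # Entirely made-up numbers (sats), just to show the asymmetry:
    # D' only needs to outbid each honest package individually, while the
    # value it puts at risk is the sum across all attacked channels.
    honest_package_fees = [30_000, 30_000, 30_000]      # P1, P2, P3
    htlc_value_at_stake = [400_000, 400_000, 400_000]   # per channel

    d_prime_fee = 100_000   # attacker's shared high-fee child D'

    print(all(d_prime_fee > f for f in honest_package_fees))  # True
    print(d_prime_fee < sum(htlc_value_at_stake))             # True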

> So far, my understanding is that multi-parent-1-child is desired for
> batched fee-bumping (
> https://github.com/bitcoin/bitcoin/pull/22674#issuecomment-897951289) and
> I've also seen your response which I have less context on (
> https://github.com/bitcoin/bitcoin/pull/22674#issuecomment-900352202). That
> being said, I am happy to create a new proposal for 1 parent + 1 child
> (which would be slightly simpler) and plan for moving to
> multi-parent-1-child later if that is preferred. I am very interested in
> hearing feedback on that approach.

I think batched fee-bumping is okay as long as you don't have
time-sensitive outputs encumbering your commitment transactions. For the
reasons mentioned above, I think batching is unsafe when you do.

What I'm worried about is L2 developers, potentially not aware of all
the mempool subtleties, blurring the difference and always batching their
broadcasts by default.

IMO, a good thing about restraining to 1 parent + 1 child is that we
artificially constrain the L2 design space for now and minimize the risks
of unsafe usage of the package API :)

I think that's a point where it would be relevant to have the opinion of
more L2 devs.

> I think there is a misunderstanding here - let me describe what I'm
> proposing we'd do in this situation: we'll try individual submission for A,
> see that it fails due to "insufficient fees." Then, we'll try package
> validation for A+B and use package RBF. If A+B pays enough, it can still
> replace A'. If A fails for a bad signature, we won't look at B or A+B. Does
> this meet your expectations?

Yes, there was a misunderstanding; I think this approach is correct, it's
more a question of performance. Do we assume that broadcast packages are
"honest" by default and that the parent(s) always need the child to pass
the fee checks, thereby saving the processing of individual transactions
which are expected to fail in 99% of cases, or do we expect more ad hoc
composition of packages at relay ?
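
For reference, a control-flow sketch of the ordering you describe
(hypothetical helpers, not Core's API):

    # Sketch of "individual submission first, package validation as a
    # fallback for fee failures"; helper names are hypothetical.
    def submit(txs, try_individual, try_package):
        fee_failures = []
        for tx in txs:
            ok, reason = try_individual(tx)
            if ok:
                continue
            if reason != "insufficient fees":
                # e.g. a bad signature: give up on the rest of the package
                return "package abandoned ({}: {})".format(tx, reason)
            fee_failures.append(tx)
        if not fee_failures:
            return "all accepted individually"
        # Only fee failures remain: evaluate with package feerate and
        # package RBF so the child can pay for its parents.
        return try_package(txs)

    individual = lambda tx: (False, "insufficient fees") if tx == "A" else (True, "")
    package = lambda txs: "accepted as a package: " + "+".join(txs)
    print(submit(["A", "B"], individual, package))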

I think this point is quite dependent on the p2p package format/logic
we'll end up with, and that we should feel free to revisit it later ?


> What problem are you trying to solve by the package feerate *after* dedup
> rule ?
> My understanding is that an in-package transaction might be already in
> the mempool. Therefore, to compute a correct RBF penalty replacement, the
> vsize of this transaction could be discarded lowering the cost of package
> RBF.

> I'm proposing that, when a transaction has already been submitted to
> mempool, we would ignore both its fees and vsize when calculating package
> feerate.

Yes, if you receive A+B, and A is already in-mempool, I agree you can
discard its feerate as B should pay for all fees checked on its own. Where
I'm unclear is when you have in-mempool A+B and receive A+B'. Should B'
have a fee high enough to cover the replacement bandwidth penalty
(`PaysForRBF`, 2nd check) of both A+B' or only B' ?
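
To make the two readings concrete (assumed numbers, incrementalRelayFee of
1 sat/vB; which one the proposal should use is exactly the open question):

    # The two possible readings of the replacement bandwidth penalty,
    # with assumed numbers.
    INCREMENTAL = 1                       # sat/vB
    vsize_A, vsize_B_prime = 200, 150     # vB, assumed
    fees_being_replaced = 10_000          # B (and descendants) being evicted

    # Reading 1: after dedup only B' is new bandwidth, so B' pays the
    # penalty on its own vsize.
    min_fee_dedup = fees_being_replaced + INCREMENTAL * vsize_B_prime
    # Reading 2: the package is conceptually A+B', so B' must also cover
    # A's vsize even though A is already in the mempool.
    min_fee_full = fees_being_replaced + INCREMENTAL * (vsize_A + vsize_B_prime)

    print(min_fee_dedup, min_fee_full)    # 10150 10350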

If you have a second-layer like current Lightning, you might have a
counterparty commitment to replace and should always expect to have to pay
for parent replacement bandwidth.

Where a potential discount sounds interesting is when you have an
unambiguous state on the first stage of transactions, e.g. a DLC's funding
transaction, which might be CPFP'ed by any participant, IIRC.

> Note that, if C' conflicts with C, it also conflicts with D, since D is a
> descendant of C and would thus need to be evicted along with it.

Ah, once again I think it's a misunderstanding without the code in front of
my eyes! If we do C' `PreChecks`, resolve the conflicts it provokes, i.e.
mark D for potential eviction and don't consider it for future conflicts in
the rest of the package, I think D' `PreChecks` should be good ?
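
A tiny sketch of the bookkeeping I have in mind (hypothetical helper, not
the actual `PreChecks` code):

    # Conflicts resolved for an earlier package member are remembered, so
    # later members don't trip over transactions already marked for
    # eviction. `mempool_conflicts` maps each package tx to the mempool
    # txs it conflicts with (all names hypothetical).
    def package_prechecks(package, mempool_conflicts):
        to_evict = set()
        for tx in package:
            # conflicts this tx provokes, minus what's already marked
            conflicts = mempool_conflicts.get(tx, set()) - to_evict
            to_evict |= conflicts
            # ... run the usual per-tx PreChecks here, ignoring `to_evict` ...
        return to_evict

    # C' evicts C, and D (a descendant of C) goes with it; D' then sees
    # no remaining conflict.
    print(package_prechecks(["C'", "D'"], {"C'": {"C", "D"}, "D'": {"D"}}))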

> More generally, this example is surprising to me because I didn't think
> packages would be used to fee-bump replaceable transactions. Do we want the
> child to be able to replace mempool transactions as well?

If we mean the case where you have replaceable A+B and then A'+B' tries to
replace it with a higher feerate ? I think that's exactly the case we need
for Lightning, as A+B is coming from Alice and A'+B' is coming from Bob :/

> I'm not sure what you mean? Let's say we have a package of parent A + child
> B, where A is supposed to replace a mempool transaction A'. Are you saying
> that counterparties are able to malleate the package child B, or a child of
> A'?

The second option, a child of A'. In the LN case, I think the CPFP is
attached to one's anchor output.

I think it's good if we assume the
solve-conflicts-after-parent's-`PreChecks` approach mentioned above, or
fixing inherited signaling, or full-RBF ?

> Sorry, I don't understand what you mean by "preserve the package
> integrity?" Could you elaborate?

After thinking about it, the relaxation of the "new" unconfirmed input rule
is not linked to trimming but, I would say, more to the multi-parent
support.

Let's say you have A+B trying to replace C+D, where B is also spending the
already-in-mempool E. To succeed, you need to waive the
no-new-unconfirmed-input rule, as D isn't spending E.

So good, I think we agree on the problem description here.

> I am in agreement with your calculations but unsure if we disagree on the
> expected outcome. Yes, B has an ancestor score of 10sat/vb and D has an
> ancestor score of ~2.9sat/vb. Since D's ancestor score is lower than B's,
> it fails the proposed package RBF Rule #2, so this package would be
> rejected. Does this meet your expectations?

Well, what sounds odd to me is that, in my example, we fail D even though it
has a higher fee than B. Like, A+B absolute fees are 2000 sats and A+C+D
absolute fees are 3500 sats ?

Is this compatible with a model where a miner prioritizes absolute fees
over ancestor score, in the case that mempools aren't full enough to
fill a block ?
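
Working through those numbers (sats and vB as given):

    # A and B are in the mempool; C and D are the candidate replacement
    # package (D spends A and C).
    A = {"fee": 1_000, "vsize": 100}
    B = {"fee": 1_000, "vsize": 100}
    C = {"fee": 1_000, "vsize": 1_000}
    D = {"fee": 1_500, "vsize": 100}

    def ancestor_feerate(txs):
        return sum(t["fee"] for t in txs) / sum(t["vsize"] for t in txs)

    print(ancestor_feerate([A, B]))     # 10.0 sat/vB  (B's ancestor score)
    print(ancestor_feerate([A, C, D]))  # ~2.92 sat/vB (D's ancestor score)
    # Absolute fees: A+B = 2,000 sats vs A+C+D = 3,500 sats. D pays more
    # in absolute fees, yet its ancestor feerate is lower, so the proposed
    # Rule #2 rejects the replacement.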

Let me know if I can clarify a point.

Antoine

Le lun. 20 sept. 2021 à 11:10, Gloria Zhao <gloriajzhao@gmail•com> a écrit :

>
> Hi Antoine,
>
> First of all, thank you for the thorough review. I appreciate your insight
> on LN requirements.
>
> > IIUC, you have a package A+B+C submitted for acceptance and A is already
> in your mempool. You trim out A from the package and then evaluate B+C.
>
> > I think this might be an issue if A is the higher-fee element of the ABC
> package. B+C package fees might be under the mempool min fee and will be
> rejected, potentially breaking the acceptance expectations of the package
> issuer ?
>
> Correct, if B+C is too low feerate to be accepted, we will reject it. I
> prefer this because it is incentive compatible: A can be mined by itself,
> so there's no reason to prefer A+B+C instead of A.
> As another way of looking at this, consider the case where we do accept
> A+B+C and it sits at the "bottom" of our mempool. If our mempool reaches
> capacity, we evict the lowest descendant feerate transactions, which are
> B+C in this case. This gives us the same resulting mempool, with A and not
> B+C.
>
>
> > Further, I think the dedup should be done on wtxid, as you might have
> multiple valid witnesses. Though with varying vsizes and as such offering
> different feerates.
>
> I agree that variations of the same package with different witnesses is a
> case that must be handled. I consider witness replacement to be a project
> that can be done in parallel to package mempool acceptance because being
> able to accept packages does not worsen the problem of a
> same-txid-different-witness "pinning" attack.
>
> If or when we have witness replacement, the logic is: if the individual
> transaction is enough to replace the mempool one, the replacement will
> happen during the preceding individual transaction acceptance, and
> deduplication logic will work. Otherwise, we will try to deduplicate by
> wtxid, see that we need a package witness replacement, and use the package
> feerate to evaluate whether this is economically rational.
>
> See the #22290 "handle package transactions already in mempool" commit (
> https://github.com/bitcoin/bitcoin/pull/22290/commits/fea75a2237b46cf76145242fecad7e274bfcb5ff),
> which handles the case of same-txid-different-witness by simply using the
> transaction in the mempool for now, with TODOs for what I just described.
>
>
> > I'm not clearly understanding the accepted topologies. By "parent and
> child to share a parent", do you mean the set of transactions A, B, C,
> where B is spending A and C is spending A and B would be correct ?
>
> Yes, that is what I meant. Yes, that would be a valid package under these
> rules.
>
> > If yes, is there a width-limit introduced or we fallback on
> MAX_PACKAGE_COUNT=25 ?
>
> No, there is no limit on connectivity other than "child with all
> unconfirmed parents." We will enforce MAX_PACKAGE_COUNT=25 and child's
> in-mempool + in-package ancestor limits.
>
>
> > Considering the current Core's mempool acceptance rules, I think CPFP
> batching is unsafe for LN time-sensitive closure. A malicious tx-relay
> jamming successful on one channel commitment transaction would contaminate
> the remaining commitments sharing the same package.
>
> > E.g, you broadcast the package A+B+C+D+E where A,B,C,D are commitment
> transactions and E a shared CPFP. If a malicious A' transaction has a
> better feerate than A, the whole package acceptance will fail. Even if A'
> confirms in the following block,
> the propagation and confirmation of B+C+D have been delayed. This could
> result in a loss of funds.
>
> Please note that A may replace A' even if A' has higher fees than A
> individually, because the proposed package RBF utilizes the fees and size
> of the entire package. This just requires E to pay enough fees, although
> this can be pretty high if there are also potential B' and C' competing
> commitment transactions that we don't know about.
>
>
> > IMHO, I'm leaning towards deploying during a first phase
> 1-parent/1-child. I think it's the most conservative step still improving
> second-layer safety.
>
> So far, my understanding is that multi-parent-1-child is desired for
> batched fee-bumping (
> https://github.com/bitcoin/bitcoin/pull/22674#issuecomment-897951289) and
> I've also seen your response which I have less context on (
> https://github.com/bitcoin/bitcoin/pull/22674#issuecomment-900352202).
> That being said, I am happy to create a new proposal for 1 parent + 1 child
> (which would be slightly simpler) and plan for moving to
> multi-parent-1-child later if that is preferred. I am very interested in
> hearing feedback on that approach.
>
>
> > If A+B is submitted to replace A', where A pays 0 sats, B pays 200 sats
> and A' pays 100 sats. If we apply the individual RBF on A, A+B acceptance
> fails. For this reason I think the individual RBF should be bypassed and
> only the package RBF apply ?
>
> I think there is a misunderstanding here - let me describe what I'm
> proposing we'd do in this situation: we'll try individual submission for A,
> see that it fails due to "insufficient fees." Then, we'll try package
> validation for A+B and use package RBF. If A+B pays enough, it can still
> replace A'. If A fails for a bad signature, we won't look at B or A+B. Does
> this meet your expectations?
>
>
> > What problem are you trying to solve by the package feerate *after*
> dedup rule ?
> > My understanding is that an in-package transaction might be already in
> the mempool. Therefore, to compute a correct RBF penalty replacement, the
> vsize of this transaction could be discarded lowering the cost of package
> RBF.
>
> I'm proposing that, when a transaction has already been submitted to
> mempool, we would ignore both its fees and vsize when calculating package
> feerate. In example G2, we shouldn't count M1 fees after its submission to
> mempool, since M1's fees have already been used to pay for its individual
> bandwidth, and it shouldn't be used again to pay for P2 and P3's bandwidth.
> We also shouldn't count its vsize, since it has already been paid for.
>
>
> > I think this is a footgunish API, as if a package issuer sends the
> multiple-parent-one-child package A,B,C,D where D is the child of A,B,C.
> Then try to broadcast the higher-feerate C'+D' package, it should be
> rejected. So it's breaking the naive broadcaster assumption that a
> higher-feerate/higher-fee package always replaces ?
>
> Note that, if C' conflicts with C, it also conflicts with D, since D is a
> descendant of C and would thus need to be evicted along with it.
> Implicitly, D' would not be in conflict with D.
> More generally, this example is surprising to me because I didn't think
> packages would be used to fee-bump replaceable transactions. Do we want the
> child to be able to replace mempool transactions as well? This can be
> implemented with a bit of additional logic.
>
> > I think this is unsafe for L2s if counterparties have malleability of
> the child transaction. They can block your package replacement by
> opting-out from RBF signaling. IIRC, LN's "anchor output" presents such an
> ability.
>
> I'm not sure what you mean? Let's say we have a package of parent A +
> child B, where A is supposed to replace a mempool transaction A'. Are you
> saying that counterparties are able to malleate the package child B, or a
> child of A'? If they can malleate a child of A', that shouldn't matter as
> long as A' is signaling replacement. This would be handled identically with
> full RBF and what Core currently implements.
>
> > I think this is an issue brought by the trimming during the dedup phase.
> If we preserve the package integrity, only re-using the tx-level checks
> results of already in-mempool transactions to gain in CPU time we won't
> have this issue. Package children can add unconfirmed inputs as long as
> they're in-package, the BIP125 rule #2 is only evaluated against parents ?
>
> Sorry, I don't understand what you mean by "preserve the package
> integrity?" Could you elaborate?
>
> > Let's say you have in-mempool A, B where A pays 10 sat/vb for 100 vbytes
> and B pays 10 sat/vb for 100 vbytes. You have the candidate replacement D
> spending both A and C where D pays 15sat/vb for 100 vbytes and C pays 1
> sat/vb for 1000 vbytes.
>
> > Package A + B ancestor score is 10 sat/vb.
>
> > D has a higher feerate/absolute fee than B.
>
> > Package A + C + D ancestor score is ~ 3 sat/vb ((A's 1000 sats + C's
> 1000 sats + D's 1500 sats) / A's 100 vb + C's 1000 vb + D's 100 vb)
>
> I am in agreement with your calculations but unsure if we disagree on the
> expected outcome. Yes, B has an ancestor score of 10sat/vb and D has an
> ancestor score of ~2.9sat/vb. Since D's ancestor score is lower than B's,
> it fails the proposed package RBF Rule #2, so this package would be
> rejected. Does this meet your expectations?
>
> Thank you for linking to projects that might be interested in package
> relay :)
>
> Thanks,
> Gloria
>
> On Mon, Sep 20, 2021 at 12:16 AM Antoine Riard <antoine.riard@gmail•com>
> wrote:
>
>> Hi Gloria,
>>
>> > A package may contain transactions that are already in the mempool. We
>> > remove
>> > ("deduplicate") those transactions from the package for the purposes of
>> > package
>> > mempool acceptance. If a package is empty after deduplication, we do
>> > nothing.
>>
>> IIUC, you have a package A+B+C submitted for acceptance and A is already
>> in your mempool. You trim out A from the package and then evaluate B+C.
>>
>> I think this might be an issue if A is the higher-fee element of the ABC
>> package. B+C package fees might be under the mempool min fee and will be
>> rejected, potentially breaking the acceptance expectations of the package
>> issuer ?
>>
>> Further, I think the dedup should be done on wtxid, as you might have
>> multiple valid witnesses. Though with varying vsizes and as such offering
>> different feerates.
>>
>> E.g you're going to evaluate the package A+B and A' is already in your
>> mempool with a bigger valid witness. You trim A based on txid, then you
>> evaluate A'+B, which fails the fee checks. However, evaluating A+B would
>> have been a success.
>>
>> AFAICT, the dedup rationale would be to save on CPU time/disk IO, to
>> avoid repeated signature verification and parent UTXO fetches ? Can we
>> achieve the same goal by bypassing tx-level checks for already-in-mempool
>> transactions while conserving the package integrity for package-level
>> checks ?
>>
>> > Note that it's possible for the parents to be
>> > indirect
>> > descendants/ancestors of one another, or for parent and child to share a
>> > parent,
>> > so we cannot make any other topology assumptions.
>>
>> I'm not clearly understanding the accepted topologies. By "parent and
>> child to share a parent", do you mean the set of transactions A, B, C,
>> where B is spending A and C is spending A and B would be correct ?
>>
>> If yes, is there a width-limit introduced or we fallback on
>> MAX_PACKAGE_COUNT=25 ?
>>
>> IIRC, one rationale to come with this topology limitation was to lower
>> the DoS risks when potentially deploying p2p packages.
>>
>> Considering the current Core's mempool acceptance rules, I think CPFP
>> batching is unsafe for LN time-sensitive closure. A malicious tx-relay
>> jamming successful on one channel commitment transaction would contaminate
>> the remaining commitments sharing the same package.
>>
>> E.g, you broadcast the package A+B+C+D+E where A,B,C,D are commitment
>> transactions and E a shared CPFP. If a malicious A' transaction has a
>> better feerate than A, the whole package acceptance will fail. Even if A'
>> confirms in the following block,
>> the propagation and confirmation of B+C+D have been delayed. This could
>> result in a loss of funds.
>>
>> That said, if you're broadcasting commitment transactions without
>> time-sensitive HTLC outputs, I think the batching is effectively a fee
>> saving as you don't have to duplicate the CPFP.
>>
>> IMHO, I'm leaning towards deploying during a first phase
>> 1-parent/1-child. I think it's the most conservative step still improving
>> second-layer safety.
>>
>> > *Rationale*:  It would be incorrect to use the fees of transactions that are
>> > already in the mempool, as we do not want a transaction's fees to be
>> > double-counted for both its individual RBF and package RBF.
>>
>> I'm unsure about the logical order of the checks proposed.
>>
>> If A+B is submitted to replace A', where A pays 0 sats, B pays 200 sats
>> and A' pays 100 sats. If we apply the individual RBF on A, A+B acceptance
>> fails. For this reason I think the individual RBF should be bypassed and
>> only the package RBF apply ?
>>
>> Note this situation is plausible: with the current LN design, your
>> counterparty can have a commitment transaction with a better fee just by
>> selecting a higher `dust_limit_satoshis` than yours.
>>
>> > Examples F and G [14] show the same package, but P1 is submitted
>> > individually before the package in example G. In example F, we can see
>> > that the 300vB package pays an additional 200sat in fees, which is not
>> > enough to pay for its own bandwidth (BIP125#4). In example G, we can
>> > see that P1 pays enough to replace M1, but using P1's fees again during
>> > package submission would make it look like a 300sat increase for a
>> > 200vB package. Even including its fees and size would not be sufficient
>> > in this example, since the 300sat looks like enough for the 300vB
>> > package. The calculation after deduplication is 100sat increase for a
>> > package of size 200vB, which correctly fails BIP125#4. Assume all
>> > transactions have a size of 100vB.
>>
>> What problem are you trying to solve by the package feerate *after* dedup
>> rule ?
>>
>> My understanding is that an in-package transaction might be already in
>> the mempool. Therefore, to compute a correct RBF penalty replacement, the
>> vsize of this transaction could be discarded lowering the cost of package
>> RBF.
>>
>> If we keep a "safe" dedup mechanism (see my point above), I think this
>> discount is justified, as the validation cost of node operators is paid for
>> ?
>>
>> > The child cannot replace mempool transactions.
>>
>> Let's say you issue package A+B, then package C+B', where B' is a child
>> of both A and C. This rule fails the acceptance of C+B' ?
>>
>> I think this is a footgunish API, as if a package issuer sends the
>> multiple-parent-one-child package A,B,C,D where D is the child of A,B,C.
>> Then try to broadcast the higher-feerate C'+D' package, it should be
>> rejected. So it's breaking the naive broadcaster assumption that a
>> higher-feerate/higher-fee package always replaces ? And it might be unsafe
>> in protocols where states are symmetric. E.g a malicious counterparty
>> broadcasts first S+A, then you honestly broadcast S+B, where B pays better
>> fees.
>>
>> > All mempool transactions to be replaced must signal replaceability.
>>
>> I think this is unsafe for L2s if counterparties have malleability of the
>> child transaction. They can block your package replacement by opting-out
>> from RBF signaling. IIRC, LN's "anchor output" presents such an ability.
>>
>> I think it's better to either fix inherited signaling or move towards
>> full-rbf.
>>
>> > if a package parent has already been submitted, it would look
>> > like the child is spending a "new" unconfirmed input.
>>
>> I think this is an issue brought by the trimming during the dedup phase.
>> If we preserve the package integrity, only re-using the tx-level checks
>> results of already in-mempool transactions to gain in CPU time we won't
>> have this issue. Package children can add unconfirmed inputs as long as
>> they're in-package, the BIP125 rule #2 is only evaluated against parents ?
>>
>> > However, we still achieve the same goal of requiring the
>> > replacement
>> > transactions to have an ancestor score at least as high as the original
>> > ones.
>>
>> I'm not sure if this holds...
>>
>> Let's say you have in-mempool A, B where A pays 10 sat/vb for 100 vbytes
>> and B pays 10 sat/vb for 100 vbytes. You have the candidate replacement D
>> spending both A and C where D pays 15sat/vb for 100 vbytes and C pays 1
>> sat/vb for 1000 vbytes.
>>
>> Package A + B ancestor score is 10 sat/vb.
>>
>> D has a higher feerate/absolute fee than B.
>>
>> Package A + C + D ancestor score is ~ 3 sat/vb ((A's 1000 sats + C's 1000
>> sats + D's 1500 sats) /
>> A's 100 vb + C's 1000 vb + D's 100 vb)
>>
>> Overall, this is a review through the lenses of LN requirements. I think
>> other L2 protocols/applications
>> could be candidates to using package accept/relay such as:
>> * https://github.com/lightninglabs/pool
>> * https://github.com/discreetlogcontracts/dlcspecs
>> * https://github.com/bitcoin-teleport/teleport-transactions/
>> * https://github.com/sapio-lang/sapio
>> * https://github.com/commerceblock/mercury/blob/master/doc/statechains.md
>> * https://github.com/revault/practical-revault
>>
>> Thanks for rolling forward the ball on this subject.
>>
>> Antoine
>>
>> Le jeu. 16 sept. 2021 à 03:55, Gloria Zhao via bitcoin-dev <
>> bitcoin-dev@lists•linuxfoundation.org> a écrit :
>>
>>> Hi there,
>>>
>>> I'm writing to propose a set of mempool policy changes to enable package
>>> validation (in preparation for package relay) in Bitcoin Core. These
>>> would not
>>> be consensus or P2P protocol changes. However, since mempool policy
>>> significantly affects transaction propagation, I believe this is
>>> relevant for
>>> the mailing list.
>>>
>>> My proposal enables packages consisting of multiple parents and 1 child.
>>> If you
>>> develop software that relies on specific transaction relay assumptions
>>> and/or
>>> are interested in using package relay in the future, I'm very interested
>>> to hear
>>> your feedback on the utility or restrictiveness of these package
>>> policies for
>>> your use cases.
>>>
>>> A draft implementation of this proposal can be found in [Bitcoin Core
>>> PR#22290][1].
>>>
>>> An illustrated version of this post can be found at
>>> https://gist.github.com/glozow/dc4e9d5c5b14ade7cdfac40f43adb18a.
>>> I have also linked the images below.
>>>
>>> ## Background
>>>
>>> Feel free to skip this section if you are already familiar with mempool
>>> policy
>>> and package relay terminology.
>>>
>>> ### Terminology Clarifications
>>>
>>> * Package = an ordered list of related transactions, representable by a
>>> Directed
>>>   Acyclic Graph.
>>> * Package Feerate = the total modified fees divided by the total virtual
>>> size of
>>>   all transactions in the package.
>>>     - Modified fees = a transaction's base fees + fee delta applied by
>>> the user
>>>       with `prioritisetransaction`. As such, we expect this to vary
>>> across
>>> mempools.
>>>     - Virtual Size = the maximum of virtual sizes calculated using
>>> [BIP141
>>>       virtual size][2] and sigop weight. [Implemented here in Bitcoin
>>> Core][3].
>>>     - Note that feerate is not necessarily based on the base fees and
>>> serialized
>>>       size.
>>>
>>> * Fee-Bumping = user/wallet actions that take advantage of miner
>>> incentives to
>>>   boost a transaction's candidacy for inclusion in a block, including
>>> Child Pays
>>> for Parent (CPFP) and [BIP125][12] Replace-by-Fee (RBF). Our intention in
>>> mempool policy is to recognize when the new transaction is more
>>> economical to
>>> mine than the original one(s) but not open DoS vectors, so there are some
>>> limitations.
>>>
>>> ### Policy
>>>
>>> The purpose of the mempool is to store the best (to be most
>>> incentive-compatible
>>> with miners, highest feerate) candidates for inclusion in a block.
>>> Miners use
>>> the mempool to build block templates. The mempool is also useful as a
>>> cache for
>>> boosting block relay and validation performance, aiding transaction
>>> relay, and
>>> generating feerate estimations.
>>>
>>> Ideally, all consensus-valid transactions paying reasonable fees should
>>> make it
>>> to miners through normal transaction relay, without any special
>>> connectivity or
>>> relationships with miners. On the other hand, nodes do not have unlimited
>>> resources, and a P2P network designed to let any honest node broadcast
>>> their
>>> transactions also exposes the transaction validation engine to DoS
>>> attacks from
>>> malicious peers.
>>>
>>> As such, for unconfirmed transactions we are considering for our
>>> mempool, we
>>> apply a set of validation rules in addition to consensus, primarily to
>>> protect
>>> us from resource exhaustion and aid our efforts to keep the highest fee
>>> transactions. We call this mempool _policy_: a set of (configurable,
>>> node-specific) rules that transactions must abide by in order to be
>>> accepted
>>> into our mempool. Transaction "Standardness" rules and mempool
>>> restrictions such
>>> as "too-long-mempool-chain" are both examples of policy.
>>>
>>> ### Package Relay and Package Mempool Accept
>>>
>>> In transaction relay, we currently consider transactions one at a time
>>> for
>>> submission to the mempool. This creates a limitation in the node's
>>> ability to
>>> determine which transactions have the highest feerates, since we cannot
>>> take
>>> into account descendants (i.e. cannot use CPFP) until all the
>>> transactions are
>>> in the mempool. Similarly, we cannot use a transaction's descendants when
>>> considering it for RBF. When an individual transaction does not meet the
>>> mempool
>>> minimum feerate and the user isn't able to create a replacement
>>> transaction
>>> directly, it will not be accepted by mempools.
>>>
>>> This limitation presents a security issue for applications and users
>>> relying on
>>> time-sensitive transactions. For example, Lightning and other protocols
>>> create
>>> UTXOs with multiple spending paths, where one counterparty's spending
>>> path opens
>>> up after a timelock, and users are protected from cheating scenarios as
>>> long as
>>> they redeem on-chain in time. A key security assumption is that all
>>> parties'
>>> transactions will propagate and confirm in a timely manner. This
>>> assumption can
>>> be broken if fee-bumping does not work as intended.
>>>
>>> The end goal for Package Relay is to consider multiple transactions at
>>> the same
>>> time, e.g. a transaction with its high-fee child. This may help us better
>>> determine whether transactions should be accepted to our mempool,
>>> especially if
>>> they don't meet fee requirements individually or are better RBF
>>> candidates as a
>>> package. A combination of changes to mempool validation logic, policy,
>>> and
>>> transaction relay allows us to better propagate the transactions with the
>>> highest package feerates to miners, and makes fee-bumping tools more
>>> powerful
>>> for users.
>>>
>>> The "relay" part of Package Relay suggests P2P messaging changes, but a
>>> large
>>> part of the changes are in the mempool's package validation logic. We
>>> call this
>>> *Package Mempool Accept*.
>>>
>>> ### Previous Work
>>>
>>> * Given that mempool validation is DoS-sensitive and complex, it would be
>>>   dangerous to haphazardly tack on package validation logic. Many
>>> efforts have
>>> been made to make mempool validation less opaque (see [#16400][4],
>>> [#21062][5],
>>> [#22675][6], [#22796][7]).
>>> * [#20833][8] Added basic capabilities for package validation, test
>>> accepts only
>>>   (no submission to mempool).
>>> * [#21800][9] Implemented package ancestor/descendant limit checks for
>>> arbitrary
>>>   packages. Still test accepts only.
>>> * Previous package relay proposals (see [#16401][10], [#19621][11]).
>>>
>>> ### Existing Package Rules
>>>
>>> These are in master as introduced in [#20833][8] and [#21800][9]. I'll
>>> consider
>>> them as "given" in the rest of this document, though they can be
>>> changed, since
>>> package validation is test-accept only right now.
>>>
>>> 1. A package cannot exceed `MAX_PACKAGE_COUNT=25` count and
>>> `MAX_PACKAGE_SIZE=101KvB` total size [8]
>>>
>>>    *Rationale*: This is already enforced as mempool ancestor/descendant
>>> limits.
>>> Presumably, transactions in a package are all related, so exceeding this
>>> limit
>>> would mean that the package can either be split up or it wouldn't pass
>>> this
>>> mempool policy.
>>>
>>> 2. Packages must be topologically sorted: if any dependencies exist
>>> between
>>> transactions, parents must appear somewhere before children. [8]
>>>
>>> 3. A package cannot have conflicting transactions, i.e. none of them can
>>> spend
>>> the same inputs. This also means there cannot be duplicate transactions.
>>> [8]
>>>
>>> 4. When packages are evaluated against ancestor/descendant limits in a
>>> test
>>> accept, the union of all of their descendants and ancestors is
>>> considered. This
>>> is essentially a "worst case" heuristic where every transaction in the
>>> package
>>> is treated as each other's ancestor and descendant. [8]
>>> Packages for which ancestor/descendant limits are accurately captured by
>>> this
>>> heuristic: [19]
>>>
>>> There are also limitations such as the fact that CPFP carve out is not
>>> applied
>>> to package transactions. #20833 also disables RBF in package validation;
>>> this
>>> proposal overrides that to allow packages to use RBF.
>>>
>>> ## Proposed Changes
>>>
>>> The next step in the Package Mempool Accept project is to implement
>>> submission
>>> to mempool, initially through RPC only. This allows us to test the
>>> submission
>>> logic before exposing it on P2P.
>>>
>>> ### Summary
>>>
>>> - Packages may contain already-in-mempool transactions.
>>> - Packages are 2 generations, Multi-Parent-1-Child.
>>> - Fee-related checks use the package feerate. This means that wallets can
>>> create a package that utilizes CPFP.
>>> - Parents are allowed to RBF mempool transactions with a set of rules
>>> similar
>>>   to BIP125. This enables a combination of CPFP and RBF, where a
>>> transaction's descendant fees pay for replacing mempool conflicts.
>>>
>>> There is a draft implementation in [#22290][1]. It is WIP, but feedback
>>> is
>>> always welcome.
>>>
>>> ### Details
>>>
>>> #### Packages May Contain Already-in-Mempool Transactions
>>>
>>> A package may contain transactions that are already in the mempool. We
>>> remove
>>> ("deduplicate") those transactions from the package for the purposes of
>>> package
>>> mempool acceptance. If a package is empty after deduplication, we do
>>> nothing.
>>>
>>> *Rationale*: Mempools vary across the network. It's possible for a
>>> parent to be
>>> accepted to the mempool of a peer on its own due to differences in
>>> policy and
>>> fee market fluctuations. We should not reject or penalize the entire
>>> package for
>>> an individual transaction as that could be a censorship vector.
>>>
>>> #### Packages Are Multi-Parent-1-Child
>>>
>>> Only packages of a specific topology are permitted. Namely, a package is
>>> exactly
>>> 1 child with all of its unconfirmed parents. After deduplication, the
>>> package
>>> may be exactly the same, empty, 1 child, 1 child with just some of its
>>> unconfirmed parents, etc. Note that it's possible for the parents to be
>>> indirect
>>> descendants/ancestors of one another, or for parent and child to share a
>>> parent,
>>> so we cannot make any other topology assumptions.
>>>
>>> *Rationale*: This allows for fee-bumping by CPFP. Allowing multiple
>>> parents
>>> makes it possible to fee-bump a batch of transactions. Restricting
>>> packages to a
>>> defined topology is also easier to reason about and simplifies the
>>> validation
>>> logic greatly. Multi-parent-1-child allows us to think of the package as
>>> one big
>>> transaction, where:
>>>
>>> - Inputs = all the inputs of parents + inputs of the child that come from
>>>   confirmed UTXOs
>>> - Outputs = all the outputs of the child + all outputs of the parents
>>> that
>>>   aren't spent by other transactions in the package
>>>
>>> Examples of packages that follow this rule (variations of example A show
>>> some
>>> possibilities after deduplication): ![image][15]
>>>
>>> #### Fee-Related Checks Use Package Feerate
>>>
>>> Package Feerate = the total modified fees divided by the total virtual
>>> size of
>>> all transactions in the package.
>>>
>>> To meet the two feerate requirements of a mempool, i.e., the
>>> pre-configured
>>> minimum relay feerate (`minRelayTxFee`) and dynamic mempool minimum
>>> feerate, the
>>> total package feerate is used instead of the individual feerate. The
>>> individual
>>> transactions are allowed to be below feerate requirements if the package
>>> meets
>>> the feerate requirements. For example, the parent(s) in the package can
>>> have 0
>>> fees but be paid for by the child.
>>>
>>> *Rationale*: This can be thought of as "CPFP within a package," solving
>>> the
>>> issue of a parent not meeting minimum fees on its own. This allows L2
>>> applications to adjust their fees at broadcast time instead of
>>> overshooting or
>>> risking getting stuck/pinned.
>>>
>>> We use the package feerate of the package *after deduplication*.
>>>
>>> *Rationale*:  It would be incorrect to use the fees of transactions that
>>> are
>>> already in the mempool, as we do not want a transaction's fees to be
>>> double-counted for both its individual RBF and package RBF.
>>>
>>> Examples F and G [14] show the same package, but P1 is submitted
>>> individually before
>>> the package in example G. In example F, we can see that the 300vB
>>> package pays
>>> an additional 200sat in fees, which is not enough to pay for its own
>>> bandwidth
>>> (BIP125#4). In example G, we can see that P1 pays enough to replace M1,
>>> but
>>> using P1's fees again during package submission would make it look like
>>> a 300sat
>>> increase for a 200vB package. Even including its fees and size would not
>>> be
>>> sufficient in this example, since the 300sat looks like enough for the
>>> 300vB
>>> package. The calculation after deduplication is 100sat increase for a
>>> package
>>> of size 200vB, which correctly fails BIP125#4. Assume all transactions
>>> have a
>>> size of 100vB.
>>>
>>> #### Package RBF
>>>
>>> If a package meets feerate requirements as a package, the parents in the
>>> transaction are allowed to replace-by-fee mempool transactions. The
>>> child cannot
>>> replace mempool transactions. Multiple transactions can replace the same
>>> transaction, but in order to be valid, none of the transactions can try
>>> to
>>> replace an ancestor of another transaction in the same package (which
>>> would thus
>>> make its inputs unavailable).
>>>
>>> *Rationale*: Even if we are using package feerate, a package will not
>>> propagate
>>> as intended if RBF still requires each individual transaction to meet the
>>> feerate requirements.
>>>
>>> We use a set of rules slightly modified from BIP125 as follows:
>>>
>>> ##### Signaling (Rule #1)
>>>
>>> All mempool transactions to be replaced must signal replaceability.
>>>
>>> *Rationale*: Package RBF signaling logic should be the same for package
>>> RBF and
>>> single transaction acceptance. This would be updated if single
>>> transaction
>>> validation moves to full RBF.
>>>
>>> ##### New Unconfirmed Inputs (Rule #2)
>>>
>>> A package may include new unconfirmed inputs, but the ancestor feerate
>>> of the
>>> child must be at least as high as the ancestor feerates of every
>>> transaction
>>> being replaced. This is contrary to BIP125#2, which states "The
>>> replacement
>>> transaction may only include an unconfirmed input if that input was
>>> included in
>>> one of the original transactions. (An unconfirmed input spends an output
>>> from a
>>> currently-unconfirmed transaction.)"
>>>
>>> *Rationale*: The purpose of BIP125#2 is to ensure that the replacement
>>> transaction has a higher ancestor score than the original transaction(s)
>>> (see
>>> [comment][13]). Example H [16] shows how adding a new unconfirmed input
>>> can lower the
>>> ancestor score of the replacement transaction. P1 is trying to replace
>>> M1, and
>>> spends an unconfirmed output of M2. P1 pays 800sat, M1 pays 600sat, and
>>> M2 pays
>>> 100sat. Assume all transactions have a size of 100vB. While, in
>>> isolation, P1
>>> looks like a better mining candidate than M1, it must be mined with M2,
>>> so its
>>> ancestor feerate is actually 4.5sat/vB.  This is lower than M1's ancestor
>>> feerate, which is 6sat/vB.
>>>
>>> In package RBF, the rule analogous to BIP125#2 would be "none of the
>>> transactions in the package can spend new unconfirmed inputs." Example J
>>> [17] shows
>>> why, if any of the package transactions have ancestors, package feerate
>>> is no
>>> longer accurate. Even though M2 and M3 are not ancestors of P1 (which is
>>> the
>>> replacement transaction in an RBF), we're actually interested in the
>>> entire
>>> package. A miner should mine M1 which is 5sat/vB instead of M2, M3, P1,
>>> P2, and
>>> P3, which is only 4sat/vB. The Package RBF rule cannot be loosened to
>>> only allow
>>> the child to have new unconfirmed inputs, either, because it can still
>>> cause us
>>> to overestimate the package's ancestor score.
>>>
>>> However, enforcing a rule analogous to BIP125#2 would not only make
>>> Package RBF
>>> less useful, but would also break Package RBF for packages with parents
>>> already
>>> in the mempool: if a package parent has already been submitted, it would
>>> look
>>> like the child is spending a "new" unconfirmed input. In example K [18],
>>> we're
>>> looking to replace M1 with the entire package including P1, P2, and P3.
>>> We must
>>> consider the case where one of the parents is already in the mempool (in
>>> this
>>> case, P2), which means we must allow P3 to have new unconfirmed inputs.
>>> However,
>>> M2 lowers the ancestor score of P3 to 4.3sat/vB, so we should not
>>> replace M1
>>> with this package.
>>>
>>> Thus, the package RBF rule regarding new unconfirmed inputs is less
>>> strict than
>>> BIP125#2. However, we still achieve the same goal of requiring the
>>> replacement
>>> transactions to have an ancestor score at least as high as the original
>>> ones. As
>>> a result, the entire package is required to be a higher feerate mining
>>> candidate
>>> than each of the replaced transactions.
>>>
>>> Another note: the [comment][13] above the BIP125#2 code in the original
>>> RBF
>>> implementation suggests that the rule was intended to be temporary.
>>>
>>> ##### Absolute Fee (Rule #3)
>>>
>>> The package must increase the absolute fee of the mempool, i.e. the
>>> total fees
>>> of the package must be higher than the absolute fees of the mempool
>>> transactions
>>> it replaces. Combined with the CPFP rule above, this differs from BIP125
>>> Rule #3
>>> - an individual transaction in the package may have lower fees than the
>>>   transaction(s) it is replacing. In fact, it may have 0 fees, and the
>>> child
>>> pays for RBF.
>>>
>>> ##### Feerate (Rule #4)
>>>
>>> The package must pay for its own bandwidth; the package feerate must be
>>> higher
>>> than the replaced transactions by at least minimum relay feerate
>>> (`incrementalRelayFee`). Combined with the CPFP rule above, this differs
>>> from
>>> BIP125 Rule #4 - an individual transaction in the package can have a
>>> lower
>>> feerate than the transaction(s) it is replacing. In fact, it may have 0
>>> fees,
>>> and the child pays for RBF.
>>>
>>> ##### Total Number of Replaced Transactions (Rule #5)
>>>
>>> The package cannot replace more than 100 mempool transactions. This is
>>> identical
>>> to BIP125 Rule #5.
>>>
>>> ### Expected FAQs
>>>
>>> 1. Is it possible for only some of the package to make it into the
>>> mempool?
>>>
>>>    Yes, it is. However, since we evict transactions from the mempool by
>>> descendant score and the package child is supposed to be sponsoring the
>>> fees of
>>> its parents, the most common scenario would be all-or-nothing. This is
>>> incentive-compatible. In fact, to be conservative, package validation
>>> should
>>> begin by trying to submit all of the transactions individually, and only
>>> use the
>>> package mempool acceptance logic if the parents fail due to low feerate.
>>>
>>> 2. Should we allow packages to contain already-confirmed transactions?
>>>
>>>     No, for practical reasons. In mempool validation, we actually aren't
>>> able to
>>> tell with 100% confidence if we are looking at a transaction that has
>>> already
>>> confirmed, because we look up inputs using a UTXO set. If we have
>>> historical
>>> block data, it's possible to look for it, but this is inefficient, not
>>> always
>>> possible for pruning nodes, and unnecessary because we're not going to do
>>> anything with the transaction anyway. As such, we already have the
>>> expectation
>>> that transaction relay is somewhat "stateful" i.e. nobody should be
>>> relaying
>>> transactions that have already been confirmed. Similarly, we shouldn't be
>>> relaying packages that contain already-confirmed transactions.
>>>
>>> [1]: https://github.com/bitcoin/bitcoin/pull/22290
>>> [2]:
>>> https://github.com/bitcoin/bips/blob/1f0b563738199ca60d32b4ba779797fc97d040fe/bip-0141.mediawiki#transaction-size-calculations
>>> [3]:
>>> https://github.com/bitcoin/bitcoin/blob/94f83534e4b771944af7d9ed0f40746f392eb75e/src/policy/policy.cpp#L282
>>> [4]: https://github.com/bitcoin/bitcoin/pull/16400
>>> [5]: https://github.com/bitcoin/bitcoin/pull/21062
>>> [6]: https://github.com/bitcoin/bitcoin/pull/22675
>>> [7]: https://github.com/bitcoin/bitcoin/pull/22796
>>> [8]: https://github.com/bitcoin/bitcoin/pull/20833
>>> [9]: https://github.com/bitcoin/bitcoin/pull/21800
>>> [10]: https://github.com/bitcoin/bitcoin/pull/16401
>>> [11]: https://github.com/bitcoin/bitcoin/pull/19621
>>> [12]: https://github.com/bitcoin/bips/blob/master/bip-0125.mediawiki
>>> [13]:
>>> https://github.com/bitcoin/bitcoin/pull/6871/files#diff-34d21af3c614ea3cee120df276c9c4ae95053830d7f1d3deaf009a4625409ad2R1101-R1104
>>> [14]:
>>> https://user-images.githubusercontent.com/25183001/133567078-075a971c-0619-4339-9168-b41fd2b90c28.png
>>> [15]:
>>> https://user-images.githubusercontent.com/25183001/132856734-fc17da75-f875-44bb-b954-cb7a1725cc0d.png
>>> [16]:
>>> https://user-images.githubusercontent.com/25183001/133567347-a3e2e4a8-ae9c-49f8-abb9-81e8e0aba224.png
>>> [17]:
>>> https://user-images.githubusercontent.com/25183001/133567370-21566d0e-36c8-4831-b1a8-706634540af3.png
>>> [18]:
>>> https://user-images.githubusercontent.com/25183001/133567444-bfff1142-439f-4547-800a-2ba2b0242bcb.png
>>> [19]:
>>> https://user-images.githubusercontent.com/25183001/133456219-0bb447cb-dcb4-4a31-b9c1-7d86205b68bc.png
>>> [20]:
>>> https://user-images.githubusercontent.com/25183001/132857787-7b7c6f56-af96-44c8-8d78-983719888c19.png
>>> _______________________________________________
>>> bitcoin-dev mailing list
>>> bitcoin-dev@lists•linuxfoundation.org
>>> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>>>
>>

[-- Attachment #2: Type: text/html, Size: 53458 bytes --]

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [bitcoin-dev] Proposal: Package Mempool Accept and Package RBF
  2021-09-23  4:29     ` Antoine Riard
@ 2021-09-23 15:36       ` Gloria Zhao
  2021-09-26 21:10         ` Antoine Riard
  2021-10-14 10:48         ` darosior
  0 siblings, 2 replies; 16+ messages in thread
From: Gloria Zhao @ 2021-09-23 15:36 UTC (permalink / raw)
  To: Antoine Riard; +Cc: Bitcoin Protocol Discussion

[-- Attachment #1: Type: text/plain, Size: 56703 bytes --]

Hi Antoine,

Thanks as always for your input. I'm glad we agree on so much!

In summary, it seems that the decisions that might still need
attention/input from devs on this mailing list are:
1. Whether we should start with multiple-parent-1-child or 1-parent-1-child.
2. Whether it's ok to require that the child not have conflicts with
mempool transactions.

Responding to your comments...

> IIUC, you have package A+B, during the dedup phase early in
`AcceptMultipleTransactions` if you observe same-txid-different-wtxid A'
and A' is higher feerate than A, you trim A and replace by A' ?

> I think this approach is safe, the one who appears unsafe to me is when
A' has a _lower_ feerate, even if A' is already accepted by our mempool ?
In that case iirc that would be a pinning.

Right, the fact that we essentially always choose the first-seen witness is
an unfortunate limitation that exists already. Adding package mempool
accept doesn't worsen this, but the procedure in the future is to replace
the witness when it makes sense economically. We can also add logic to
allow package feerate to pay for witness replacements as well. This is
pretty far into the future, though.

> It sounds uneconomical for an attacker but I think it's not when you
consider that you can "batch" attack against multiple honest
counterparties. E.g, Mallory broadcast A' + B' + C' + D' where A' conflicts
with Alice's honest package P1, B' conflicts with Bob's honest package P2,
C' conflicts with Caroll's honest package P3. And D' is a high-fee child of
A' + B' + C'.

> If D' is higher-fee than P1 or P2 or P3 but inferior to the sum of HTLCs
confirmed by P1+P2+P3, I think it's lucrative for the attacker ?

I could be misunderstanding, but an attacker wouldn't be able to
batch-attack like this. Alice's package only conflicts with A' + D', not A'
+ B' + C' + D'. She only needs to pay for evicting 2 transactions.
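
To make that concrete, here is a toy Python sketch (made-up txids, fees, and
sizes; not Bitcoin Core code) of the eviction set and the fees Alice actually
has to outbid:

```
# A replacement package must evict its direct conflicts plus their mempool
# descendants. Mallory's broadcast is A' + B' + C' + D', with D' the shared
# high-fee child; all numbers are invented for illustration.
mempool = {
    # txid: (fee_sat, vsize_vb, in_mempool_parents)
    "A'": (10_000, 200, []),
    "B'": (10_000, 200, []),
    "C'": (10_000, 200, []),
    "D'": (50_000, 100, ["A'", "B'", "C'"]),
}

def descendants(txid):
    """All mempool descendants of txid (children, grandchildren, ...)."""
    out = set()
    for other, (_, _, parents) in mempool.items():
        if txid in parents:
            out |= {other} | descendants(other)
    return out

def eviction_set(direct_conflicts):
    """Evicting a transaction also evicts all of its descendants."""
    evicted = set()
    for txid in direct_conflicts:
        evicted |= {txid} | descendants(txid)
    return evicted

# Alice's package conflicts directly with A' only; D' goes along as A''s
# descendant, but B' and C' stay, so she never pays to outbid them.
to_evict = eviction_set({"A'"})
print(sorted(to_evict))                      # ["A'", "D'"]
print(sum(mempool[t][0] for t in to_evict))  # 60000 sat, not 80000
```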

> Do we assume that broadcasted packages are "honest" by default and that
the parent(s) always need the child to pass the fee checks, that way saving
the processing of individual transactions which are expected to fail in 99%
of cases or more ad hoc composition of packages at relay ?
> I think this point is quite dependent on the p2p packages format/logic
we'll end up on and that we should feel free to revisit it later ?

I think it's the opposite; there's no way for us to assume that p2p
packages will be "honest." I'd like to have two things before we expose on
P2P: (1) ensure that the amount of resources potentially allocated for
package validation isn't disproportionately higher than that of single
transaction validation and (2) only use package validation when we're
unsatisfied with the single validation result, e.g. we might get better
fees.
Yes, let's revisit this later :)
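
For illustration, a rough sketch of that ordering (the function names and
error strings below are placeholders I made up, not the PR #22290 interfaces):

```
# Validate each transaction on its own first; only fall back to package
# logic when the failure is fee-related.
FEE_FAILURES = {"mempool min fee not met", "min relay fee not met"}

def process_package(package, accept_single, accept_package):
    """accept_single/accept_package stand in for the mempool entry points."""
    retry_as_package = False
    for tx in package:
        ok, reason = accept_single(tx)
        if ok:
            continue
        if reason in FEE_FAILURES:
            # A low fee alone isn't disqualifying: the child may pay for it.
            retry_as_package = True
        else:
            # e.g. an invalid signature: reject without spending more resources.
            return False, reason
    if retry_as_package:
        return accept_package(package)
    return True, "all transactions accepted individually"
```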

> Yes, if you receive A+B, and A is already in-mempool, I agree you can
discard its feerate as B should pay for all fees checked on its own. Where
I'm unclear is when you have in-mempool A+B and receive A+B'. Should B'
have a fee high enough to cover the bandwidth penalty replacement
(`PaysForRBF`, 2nd check) of both A+B' or only B' ?

 B' only needs to pay for itself in this case.
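
Spelled out with made-up numbers (pure arithmetic, not the implementation):
after deduplicating A, the BIP125-style checks only see B' and the conflict B
it replaces.

```
INCREMENTAL_RELAY_FEERATE = 1  # sat/vB, Bitcoin Core's default incrementalrelayfee

# Hypothetical fees/sizes: in-mempool A+B, incoming package A+B', where A is
# deduplicated and B' conflicts with B.
B_fee, B_vsize = 1_000, 200    # in-mempool child being replaced
Bp_fee, Bp_vsize = 1_500, 200  # replacement child B'

# Absolute fee check: the deduplicated package must pay more than what it
# replaces; A's fees are not counted again.
absolute_fee_ok = Bp_fee > B_fee

# Feerate/bandwidth check: B' pays for its own relay, using only B''s vsize.
bandwidth_ok = Bp_fee - B_fee >= INCREMENTAL_RELAY_FEERATE * Bp_vsize

print(absolute_fee_ok, bandwidth_ok)  # True True
```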

> > Do we want the child to be able to replace mempool transactions as well?

> If we mean when you have replaceable A+B then A'+B' try to replace with a
higher-feerate ? I think that's exactly the case we need for Lightning as
A+B is coming from Alice and A'+B' is coming from Bob :/

Let me clarify this because I can see that my wording was ambiguous, and
then please let me know whether it fits Lightning's needs.

In my proposal, I wrote "If a package meets feerate requirements as a
package, the parents in the transaction are allowed to replace-by-fee
mempool transactions. The child cannot replace mempool transactions." What
I meant was: the package can replace mempool transactions if any of the
parents conflict with mempool transactions. The child cannot conflict
with any mempool transactions.
The Lightning use case this attempts to address is: Alice and Mallory are
LN counterparties, and have packages A+B and A'+B', respectively. A and A'
are their commitment transactions and conflict with each other; they have
shared inputs and different txids.
B spends Alice's anchor output from A. B' spends Mallory's anchor output
from A'. Thus, B and B' do not conflict with each other.
Alice can broadcast her package, A+B, to replace Mallory's package, A'+B',
since B doesn't conflict with the mempool.

Would this be ok?
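
A small sketch of that scenario (all outpoints invented), just to show where
conflicts do and don't arise:

```
# Two transactions conflict iff they spend at least one common outpoint.
def conflicts(tx1, tx2):
    return bool(tx1["vin"] & tx2["vin"])

funding = "funding_txid:0"
A = {"vin": {funding}}                  # Alice's commitment
A_mal = {"vin": {funding}}              # Mallory's commitment A'
B = {"vin": {"A:anchor_alice"}}         # Alice's CPFP, spends her anchor on A
B_mal = {"vin": {"A':anchor_mallory"}}  # Mallory's CPFP, spends her anchor on A'

print(conflicts(A, A_mal))  # True:  parents conflict via the shared funding input
print(conflicts(B, B_mal))  # False: the children spend different anchor outputs
# So A+B satisfies "only parents may conflict with the mempool"; evicting A'
# also evicts its descendant B', and package RBF compares A+B's fees against
# A' + B' combined.
```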

> The second option, a child of A'. In the LN case I think the CPFP is
attached on one's anchor output.

While it would be nice to have full RBF, malleability of the child won't
block RBF here. If we're trying to replace A', we only require that A'
signals replaceability, and don't mind if its child doesn't.

> > B has an ancestor score of 10sat/vb and D has an
> > ancestor score of ~2.9sat/vb. Since D's ancestor score is lower than
B's,
> > it fails the proposed package RBF Rule #2, so this package would be
> > rejected. Does this meet your expectations?

> Well what sounds odd to me, in my example, we fail D even if it has a
higher-fee than B. Like A+B absolute fees are 2000 sats and A+C+D absolute
fees are 3500 sats ?

Yes, A+C+D pays 1500sat more in fees, but it is also 1000vB larger. A miner
should prefer to utilize their block space more effectively.
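
For reference, the ancestor-score arithmetic from your example (same numbers
as above, just written out in Python):

```
txs = {
    # txid: (fee_sat, vsize_vb, unconfirmed_ancestors)
    "A": (1000, 100, []),
    "B": (1000, 100, ["A"]),       # in-mempool child of A
    "C": (1000, 1000, []),
    "D": (1500, 100, ["A", "C"]),  # candidate replacement spending A and C
}

def ancestor_score(txid):
    fee, vsize, ancestors = txs[txid]
    total_fee = fee + sum(txs[a][0] for a in ancestors)
    total_vsize = vsize + sum(txs[a][1] for a in ancestors)
    return total_fee / total_vsize

print(round(ancestor_score("B"), 2))  # 10.0 sat/vB
print(round(ancestor_score("D"), 2))  # 2.92 sat/vB, lower than B's -> Rule #2 fails
```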

> Is this compatible with a model where a miner prioritizes absolute fees
over ancestor score, in the case that mempools aren't full-enough to
fulfill a block ?

No, because we don't use that model.
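
To illustrate the model we do use, here is a simplified toy of
ancestor-feerate-based block template building (an illustration only, not the
actual BlockAssembler code): with limited space, the 10sat/vB set {A, B} is
selected even though {C, D} pays more in absolute fees.

```
def build_template(txs, max_vsize):
    """Greedy ancestor-feerate selection. txs: {txid: (fee_sat, vsize_vb, parents)}."""
    remaining = dict(txs)
    selected, used = [], 0

    def ancestor_set(txid):
        # txid plus all of its not-yet-selected ancestors
        out, stack = set(), [txid]
        while stack:
            t = stack.pop()
            if t in out or t not in remaining:
                continue
            out.add(t)
            stack.extend(remaining[t][2])
        return out

    def feerate(txids):
        return sum(remaining[t][0] for t in txids) / sum(remaining[t][1] for t in txids)

    while remaining:
        best = max(remaining, key=lambda t: feerate(ancestor_set(t)))
        chunk = ancestor_set(best)
        vsize = sum(remaining[t][1] for t in chunk)
        if used + vsize > max_vsize:
            break
        used += vsize
        # add parents before children (fewer unselected ancestors first)
        selected += sorted(chunk, key=lambda t: len(ancestor_set(t)))
        for t in chunk:
            del remaining[t]
    return selected

print(build_template({
    "A": (1000, 100, []), "B": (1000, 100, ["A"]),
    "C": (1000, 1000, []), "D": (1500, 100, ["A", "C"]),
}, max_vsize=300))  # ['A', 'B']
```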

Thanks,
Gloria

On Thu, Sep 23, 2021 at 5:29 AM Antoine Riard <antoine.riard@gmail•com>
wrote:

> > Correct, if B+C is too low feerate to be accepted, we will reject it. I
> > prefer this because it is incentive compatible: A can be mined by itself,
> > so there's no reason to prefer A+B+C instead of A.
> > As another way of looking at this, consider the case where we do accept
> > A+B+C and it sits at the "bottom" of our mempool. If our mempool reaches
> > capacity, we evict the lowest descendant feerate transactions, which are
> > B+C in this case. This gives us the same resulting mempool, with A and
> not
> > B+C.
>
> I agree here. Doing otherwise, we might evict other mempool transactions
> in `MempoolAccept::Finalize` with a higher-feerate than B+C while those
> evicted transactions are the most compelling for block construction.
>
> I thought at first missing this acceptance requirement would break a
> fee-bumping scheme like Parent-Pay-For-Child where a high-fee parent is
> attached to a child signed with SIGHASH_ANYONECANPAY but in this case the
> child fee is capturing the parent value. I can't think of other fee-bumping
> schemes potentially affected. If they do exist I would say they're wrong in
> their design assumptions.
>
> > If or when we have witness replacement, the logic is: if the individual
> > transaction is enough to replace the mempool one, the replacement will
> > happen during the preceding individual transaction acceptance, and
> > deduplication logic will work. Otherwise, we will try to deduplicate by
> > wtxid, see that we need a package witness replacement, and use the
> package
> > feerate to evaluate whether this is economically rational.
>
> IIUC, you have package A+B, during the dedup phase early in
> `AcceptMultipleTransactions` if you observe same-txid-different-wtxid A'
> and A' is higher feerate than A, you trim A and replace by A' ?
>
> I think this approach is safe, the one who appears unsafe to me is when A'
> has a _lower_ feerate, even if A' is already accepted by our mempool ? In
> that case iirc that would be a pinning.
>
> Good to see progress on witness replacement before we see usage of Taproot
> tree in the context of multi-party, where a malicious counterparty inflates
> its witness to jam an honest spending.
>
> (Note, the commit linked currently points nowhere :))
>
>
> > Please note that A may replace A' even if A' has higher fees than A
> > individually, because the proposed package RBF utilizes the fees and size
> > of the entire package. This just requires E to pay enough fees, although
> > this can be pretty high if there are also potential B' and C' competing
> > commitment transactions that we don't know about.
>
> Ah right, if the package acceptance waives `PaysMoreThanConflicts` for the
> individual check on A, the honest package should replace the pinning
> attempt. I've not fully parsed the proposed implementation yet.
>
> Though note, I think it's still unsafe for a Lightning
> multi-commitment-broadcast-as-one-package as a malicious A' might have an
> absolute fee higher than E. It sounds uneconomical for
> an attacker but I think it's not when you consider that you can "batch"
> attack against multiple honest counterparties. E.g, Mallory broadcast A' +
> B' + C' + D' where A' conflicts with Alice's honest package P1, B'
> conflicts with Bob's honest package P2, C' conflicts with Caroll's honest
> package P3. And D' is a high-fee child of A' + B' + C'.
>
> If D' is higher-fee than P1 or P2 or P3 but inferior to the sum of HTLCs
> confirmed by P1+P2+P3, I think it's lucrative for the attacker ?
>
> > So far, my understanding is that multi-parent-1-child is desired for
> > batched fee-bumping (
> > https://github.com/bitcoin/bitcoin/pull/22674#issuecomment-897951289)
> and
> > I've also seen your response which I have less context on (
> > https://github.com/bitcoin/bitcoin/pull/22674#issuecomment-900352202).
> That
> > being said, I am happy to create a new proposal for 1 parent + 1 child
> > (which would be slightly simpler) and plan for moving to
> > multi-parent-1-child later if that is preferred. I am very interested in
> > hearing feedback on that approach.
>
> I think batched fee-bumping is okay as long as you don't have
> time-sensitive outputs encumbering your commitment transactions. For the
> reasons mentioned above, I think that's unsafe.
>
> What I'm worried about is L2 developers, potentially not aware of all
> the mempool subtleties, blurring the difference and always batching their
> broadcast by default.
>
> IMO, restraining to 1-parent + 1-child is a good thing: we artificially
> constrain the L2 design space for now and minimize risks of unsafe usage of
> the package API :)
>
> I think that's a point where it would be relevant to have the opinion of
> more L2 devs.
>
> > I think there is a misunderstanding here - let me describe what I'm
> > proposing we'd do in this situation: we'll try individual submission for
> A,
> > see that it fails due to "insufficient fees." Then, we'll try package
> > validation for A+B and use package RBF. If A+B pays enough, it can still
> > replace A'. If A fails for a bad signature, we won't look at B or A+B.
> Does
> > this meet your expectations?
>
> Yes there was a misunderstanding, I think this approach is correct, it's
> more a question of performance. Do we assume that broadcasted packages are
> "honest" by default and that the parent(s) always need the child to pass
> the fee checks, that way saving the processing of individual transactions
> which are expected to fail in 99% of cases or more ad hoc composition of
> packages at relay ?
>
> I think this point is quite dependent on the p2p packages format/logic
> we'll end up on and that we should feel free to revisit it later ?
>
>
> > What problem are you trying to solve by the package feerate *after* dedup
> rule ?
> > My understanding is that an in-package transaction might be already in
> the mempool. Therefore, to compute a correct RBF penalty replacement, the
> vsize of this transaction could be discarded lowering the cost of package
> RBF.
>
> > I'm proposing that, when a transaction has already been submitted to
> > mempool, we would ignore both its fees and vsize when calculating package
> > feerate.
>
> Yes, if you receive A+B, and A is already in-mempool, I agree you can
> discard its feerate as B should pay for all fees checked on its own. Where
> I'm unclear is when you have in-mempool A+B and receive A+B'. Should B'
> have a fee high enough to cover the bandwidth penalty replacement
> (`PaysForRBF`, 2nd check) of both A+B' or only B' ?
>
> If you have a second-layer like current Lightning, you might have a
> counterparty commitment to replace and should always expect to have to pay
> for parent replacement bandwidth.
>
> Where a potential discount sounds interesting is when you have an unambiguous
> state on the first stage of transactions. E.g DLC's funding transaction
> which might be CPFP'd by any participant iirc.
>
> > Note that, if C' conflicts with C, it also conflicts with D, since D is a
> > descendant of C and would thus need to be evicted along with it.
>
> Ah once again I think it's a misunderstanding without the code under my
> eyes! If we do C' `PreChecks`, solve the conflicts provoked by it, i.e mark
> for potential eviction D and don't consider it for future conflicts in the
> rest of the package, I think D' `PreChecks` should be good ?
>
> > More generally, this example is surprising to me because I didn't think
> > packages would be used to fee-bump replaceable transactions. Do we want
> the
> > child to be able to replace mempool transactions as well?
>
> If we mean when you have replaceable A+B then A'+B' try to replace with a
> higher-feerate ? I think that's exactly the case we need for Lightning as
> A+B is coming from Alice and A'+B' is coming from Bob :/
>
> > I'm not sure what you mean? Let's say we have a package of parent A +
> child
> > B, where A is supposed to replace a mempool transaction A'. Are you
> saying
> > that counterparties are able to malleate the package child B, or a child
> of
> > A'?
>
> The second option, a child of A'. In the LN case I think the CPFP is
> attached on one's anchor output.
>
> I think it's good if we assume the
> solve-conflicts-after-parent's `PreChecks` mentioned above or fixing
> inherited signaling or full-rbf ?
>
> > Sorry, I don't understand what you mean by "preserve the package
> > integrity?" Could you elaborate?
>
> After thinking about it, the relaxation about the "new" unconfirmed input is
> not linked to trimming but, I would say, more to the multi-parent support.
>
> Let's say you have A+B trying to replace C+D where B is also spending
> already in-mempool E. To succeed, you need to waive the no-new-unconfirmed
> input as D isn't spending E.
>
> So good, I think we agree on the problem description here.
>
> > I am in agreement with your calculations but unsure if we disagree on the
> > expected outcome. Yes, B has an ancestor score of 10sat/vb and D has an
> > ancestor score of ~2.9sat/vb. Since D's ancestor score is lower than B's,
> > it fails the proposed package RBF Rule #2, so this package would be
> > rejected. Does this meet your expectations?
>
> Well what sounds odd to me, in my example, we fail D even if it has a
> higher-fee than B. Like A+B absolute fees are 2000 sats and A+C+D absolute
> fees are 3500 sats ?
>
> Is this compatible with a model where a miner prioritizes absolute fees
> over ancestor score, in the case that mempools aren't full-enough to
> fulfill a block ?
>
> Let me know if I can clarify a point.
>
> Antoine
>
> Le lun. 20 sept. 2021 à 11:10, Gloria Zhao <gloriajzhao@gmail•com> a
> écrit :
>
>>
>> Hi Antoine,
>>
>> First of all, thank you for the thorough review. I appreciate your
>> insight on LN requirements.
>>
>> > IIUC, you have a package A+B+C submitted for acceptance and A is
>> already in your mempool. You trim out A from the package and then evaluate
>> B+C.
>>
>> > I think this might be an issue if A is the higher-fee element of the
>> ABC package. B+C package fees might be under the mempool min fee and will
>> be rejected, potentially breaking the acceptance expectations of the
>> package issuer ?
>>
>> Correct, if B+C is too low feerate to be accepted, we will reject it. I
>> prefer this because it is incentive compatible: A can be mined by itself,
>> so there's no reason to prefer A+B+C instead of A.
>> As another way of looking at this, consider the case where we do accept
>> A+B+C and it sits at the "bottom" of our mempool. If our mempool reaches
>> capacity, we evict the lowest descendant feerate transactions, which are
>> B+C in this case. This gives us the same resulting mempool, with A and not
>> B+C.
>>
>>
>> > Further, I think the dedup should be done on wtxid, as you might have
>> multiple valid witnesses. Though with varying vsizes and as such offering
>> different feerates.
>>
>> I agree that variations of the same package with different witnesses is a
>> case that must be handled. I consider witness replacement to be a project
>> that can be done in parallel to package mempool acceptance because being
>> able to accept packages does not worsen the problem of a
>> same-txid-different-witness "pinning" attack.
>>
>> If or when we have witness replacement, the logic is: if the individual
>> transaction is enough to replace the mempool one, the replacement will
>> happen during the preceding individual transaction acceptance, and
>> deduplication logic will work. Otherwise, we will try to deduplicate by
>> wtxid, see that we need a package witness replacement, and use the package
>> feerate to evaluate whether this is economically rational.
>>
>> See the #22290 "handle package transactions already in mempool" commit (
>> https://github.com/bitcoin/bitcoin/pull/22290/commits/fea75a2237b46cf76145242fecad7e274bfcb5ff),
>> which handles the case of same-txid-different-witness by simply using the
>> transaction in the mempool for now, with TODOs for what I just described.
>>
>>
>> > I'm not clearly understanding the accepted topologies. By "parent and
>> child to share a parent", do you mean the set of transactions A, B, C,
>> where B is spending A and C is spending A and B would be correct ?
>>
>> Yes, that is what I meant. Yes, that would be a valid package under these
>> rules.
>>
>> > If yes, is there a width-limit introduced or we fallback on
>> MAX_PACKAGE_COUNT=25 ?
>>
>> No, there is no limit on connectivity other than "child with all
>> unconfirmed parents." We will enforce MAX_PACKAGE_COUNT=25 and child's
>> in-mempool + in-package ancestor limits.
>>
>>
>> > Considering the current Core's mempool acceptance rules, I think CPFP
>> batching is unsafe for LN time-sensitive closure. A malicious tx-relay
>> jamming successful on one channel commitment transaction would contaminate
>> the remaining commitments sharing the same package.
>>
>> > E.g, you broadcast the package A+B+C+D+E where A,B,C,D are commitment
>> transactions and E a shared CPFP. If a malicious A' transaction has a
>> better feerate than A, the whole package acceptance will fail. Even if A'
>> confirms in the following block,
>> the propagation and confirmation of B+C+D have been delayed. This could
>> carry on a loss of funds.
>>
>> Please note that A may replace A' even if A' has higher fees than A
>> individually, because the proposed package RBF utilizes the fees and size
>> of the entire package. This just requires E to pay enough fees, although
>> this can be pretty high if there are also potential B' and C' competing
>> commitment transactions that we don't know about.
>>
>>
>> > IMHO, I'm leaning towards deploying during a first phase
>> 1-parent/1-child. I think it's the most conservative step still improving
>> second-layer safety.
>>
>> So far, my understanding is that multi-parent-1-child is desired for
>> batched fee-bumping (
>> https://github.com/bitcoin/bitcoin/pull/22674#issuecomment-897951289)
>> and I've also seen your response which I have less context on (
>> https://github.com/bitcoin/bitcoin/pull/22674#issuecomment-900352202).
>> That being said, I am happy to create a new proposal for 1 parent + 1 child
>> (which would be slightly simpler) and plan for moving to
>> multi-parent-1-child later if that is preferred. I am very interested in
>> hearing feedback on that approach.
>>
>>
>> > If A+B is submitted to replace A', where A pays 0 sats, B pays 200 sats
>> and A' pays 100 sats. If we apply the individual RBF on A, A+B acceptance
>> fails. For this reason I think the individual RBF should be bypassed and
>> only the package RBF apply ?
>>
>> I think there is a misunderstanding here - let me describe what I'm
>> proposing we'd do in this situation: we'll try individual submission for A,
>> see that it fails due to "insufficient fees." Then, we'll try package
>> validation for A+B and use package RBF. If A+B pays enough, it can still
>> replace A'. If A fails for a bad signature, we won't look at B or A+B. Does
>> this meet your expectations?
>>
>>
>> > What problem are you trying to solve by the package feerate *after*
>> dedup rule ?
>> > My understanding is that an in-package transaction might be already in
>> the mempool. Therefore, to compute a correct RBF penalty replacement, the
>> vsize of this transaction could be discarded lowering the cost of package
>> RBF.
>>
>> I'm proposing that, when a transaction has already been submitted to
>> mempool, we would ignore both its fees and vsize when calculating package
>> feerate. In example G2, we shouldn't count M1 fees after its submission to
>> mempool, since M1's fees have already been used to pay for its individual
>> bandwidth, and it shouldn't be used again to pay for P2 and P3's bandwidth.
>> We also shouldn't count its vsize, since it has already been paid for.
>>
>>
>> > I think this is a footgunish API, as if a package issuer send the
>> multiple-parent-one-child package A,B,C,D where D is the child of A,B,C.
>> Then try to broadcast the higher-feerate C'+D' package, it should be
>> rejected. So it's breaking the naive broadcaster assumption that a
>> higher-feerate/higher-fee package always replaces ?
>>
>> Note that, if C' conflicts with C, it also conflicts with D, since D is a
>> descendant of C and would thus need to be evicted along with it.
>> Implicitly, D' would not be in conflict with D.
>> More generally, this example is surprising to me because I didn't think
>> packages would be used to fee-bump replaceable transactions. Do we want the
>> child to be able to replace mempool transactions as well? This can be
>> implemented with a bit of additional logic.
>>
>> > I think this is unsafe for L2s if counterparties have malleability of
>> the child transaction. They can block your package replacement by
>> opting-out from RBF signaling. IIRC, LN's "anchor output" presents such an
>> ability.
>>
>> I'm not sure what you mean? Let's say we have a package of parent A +
>> child B, where A is supposed to replace a mempool transaction A'. Are you
>> saying that counterparties are able to malleate the package child B, or a
>> child of A'? If they can malleate a child of A', that shouldn't matter as
>> long as A' is signaling replacement. This would be handled identically with
>> full RBF and what Core currently implements.
>>
>> > I think this is an issue brought by the trimming during the dedup
>> phase. If we preserve the package integrity, only re-using the tx-level
>> checks results of already in-mempool transactions to gain in CPU time we
>> won't have this issue. Package children can add unconfirmed inputs as long as
>> they're in-package, the bip125 rule2 is only evaluated against parents ?
>>
>> Sorry, I don't understand what you mean by "preserve the package
>> integrity?" Could you elaborate?
>>
>> > Let's say you have in-mempool A, B where A pays 10 sat/vb for 100
>> vbytes and B pays 10 sat/vb for 100 vbytes. You have the candidate
>> replacement D spending both A and C where D pays 15sat/vb for 100 vbytes
>> and C pays 1 sat/vb for 1000 vbytes.
>>
>> > Package A + B ancestor score is 10 sat/vb.
>>
>> > D has a higher feerate/absolute fee than B.
>>
>> > Package A + C + D ancestor score is ~ 3 sat/vb ((A's 1000 sats + C's
>> 1000 sats + D's 1500 sats) / (A's 100 vb + C's 1000 vb + D's 100 vb))
>>
>> I am in agreement with your calculations but unsure if we disagree on the
>> expected outcome. Yes, B has an ancestor score of 10sat/vb and D has an
>> ancestor score of ~2.9sat/vb. Since D's ancestor score is lower than B's,
>> it fails the proposed package RBF Rule #2, so this package would be
>> rejected. Does this meet your expectations?
>>
>> Thank you for linking to projects that might be interested in package
>> relay :)
>>
>> Thanks,
>> Gloria
>>
>> On Mon, Sep 20, 2021 at 12:16 AM Antoine Riard <antoine.riard@gmail•com>
>> wrote:
>>
>>> Hi Gloria,
>>>
>>> > A package may contain transactions that are already in the mempool. We
>>> > remove
>>> > ("deduplicate") those transactions from the package for the purposes of
>>> > package
>>> > mempool acceptance. If a package is empty after deduplication, we do
>>> > nothing.
>>>
>>> IIUC, you have a package A+B+C submitted for acceptance and A is already
>>> in your mempool. You trim out A from the package and then evaluate B+C.
>>>
>>> I think this might be an issue if A is the higher-fee element of the ABC
>>> package. B+C package fees might be under the mempool min fee and will be
>>> rejected, potentially breaking the acceptance expectations of the package
>>> issuer ?
>>>
>>> Further, I think the dedup should be done on wtxid, as you might have
>>> multiple valid witnesses. Though with varying vsizes and as such offering
>>> different feerates.
>>>
>>> E.g you're going to evaluate the package A+B and A' is already in your
>>> mempool with a bigger valid witness. You trim A based on txid, then you
>>> evaluate A'+B, which fails the fee checks. However, evaluating A+B would
>>> have been a success.
>>>
>>> AFAICT, the dedup rationale would be to save on CPU time/IO disk, to
>>> avoid repeated signatures verification and parent UTXOs fetches ? Can we
>>> achieve the same goal by bypassing tx-level checks for already-in txn while
>>> conserving the package integrity for package-level checks ?
>>>
>>> > Note that it's possible for the parents to be
>>> > indirect
>>> > descendants/ancestors of one another, or for parent and child to share
>>> a
>>> > parent,
>>> > so we cannot make any other topology assumptions.
>>>
>>> I'm not clearly understanding the accepted topologies. By "parent and
>>> child to share a parent", do you mean the set of transactions A, B, C,
>>> where B is spending A and C is spending A and B would be correct ?
>>>
>>> If yes, is there a width-limit introduced or we fallback on
>>> MAX_PACKAGE_COUNT=25 ?
>>>
>>> IIRC, one rationale to come with this topology limitation was to lower
>>> the DoS risks when potentially deploying p2p packages.
>>>
>>> Considering the current Core's mempool acceptance rules, I think CPFP
>>> batching is unsafe for LN time-sensitive closure. A malicious tx-relay
>>> jamming successful on one channel commitment transaction would contaminate
>>> the remaining commitments sharing the same package.
>>>
>>> E.g, you broadcast the package A+B+C+D+E where A,B,C,D are commitment
>>> transactions and E a shared CPFP. If a malicious A' transaction has a
>>> better feerate than A, the whole package acceptance will fail. Even if A'
>>> confirms in the following block,
>>> the propagation and confirmation of B+C+D have been delayed. This could
>>> carry on a loss of funds.
>>>
>>> That said, if you're broadcasting commitment transactions without
>>> time-sensitive HTLC outputs, I think the batching is effectively a fee
>>> saving as you don't have to duplicate the CPFP.
>>>
>>> IMHO, I'm leaning towards deploying during a first phase
>>> 1-parent/1-child. I think it's the most conservative step still improving
>>> second-layer safety.
>>>
>>> > *Rationale*:  It would be incorrect to use the fees of transactions
>>> that are
>>> > already in the mempool, as we do not want a transaction's fees to be
>>> > double-counted for both its individual RBF and package RBF.
>>>
>>> I'm unsure about the logical order of the checks proposed.
>>>
>>> If A+B is submitted to replace A', where A pays 0 sats, B pays 200 sats
>>> and A' pays 100 sats. If we apply the individual RBF on A, A+B acceptance
>>> fails. For this reason I think the individual RBF should be bypassed and
>>> only the package RBF apply ?
>>>
>>> Note this situation is plausible: with current LN design, your
>>> counterparty can have a commitment transaction with a better fee just by
>>> selecting a higher `dust_limit_satoshis` than yours.
>>>
>>> > Examples F and G [14] show the same package, but P1 is submitted
>>> > individually before
>>> > the package in example G. In example F, we can see that the 300vB
>>> package
>>> > pays
>>> > an additional 200sat in fees, which is not enough to pay for its own
>>> > bandwidth
>>> > (BIP125#4). In example G, we can see that P1 pays enough to replace
>>> M1, but
>>> > using P1's fees again during package submission would make it look
>>> like a
>>> > 300sat
>>> > increase for a 200vB package. Even including its fees and size would
>>> not be
>>> > sufficient in this example, since the 300sat looks like enough for the
>>> 300vB
>>> > package. The calculation after deduplication is 100sat increase for a
>>> > package
>>> > of size 200vB, which correctly fails BIP125#4. Assume all transactions
>>> have
>>> > a
>>> > size of 100vB.
>>>
>>> What problem are you trying to solve by the package feerate *after*
>>> dedup rule ?
>>>
>>> My understanding is that an in-package transaction might be already in
>>> the mempool. Therefore, to compute a correct RBF penalty replacement, the
>>> vsize of this transaction could be discarded lowering the cost of package
>>> RBF.
>>>
>>> If we keep a "safe" dedup mechanism (see my point above), I think this
>>> discount is justified, as the validation cost of node operators is paid for
>>> ?
>>>
>>> > The child cannot replace mempool transactions.
>>>
>>> Let's say you issue package A+B, then package C+B', where B' is a child
>>> of both A and C. This rule fails the acceptance of C+B' ?
>>>
>>> I think this is a footgunish API, as if a package issuer send the
>>> multiple-parent-one-child package A,B,C,D where D is the child of A,B,C.
>>> Then try to broadcast the higher-feerate C'+D' package, it should be
>>> rejected. So it's breaking the naive broadcaster assumption that a
>>> higher-feerate/higher-fee package always replaces ? And it might be unsafe
>>> in protocols where states are symmetric. E.g a malicious counterparty
>>> broadcasts first S+A, then you honestly broadcast S+B, where B pays better
>>> fees.
>>>
>>> > All mempool transactions to be replaced must signal replaceability.
>>>
>>> I think this is unsafe for L2s if counterparties have malleability of
>>> the child transaction. They can block your package replacement by
>>> opting-out from RBF signaling. IIRC, LN's "anchor output" presents such an
>>> ability.
>>>
>>> I think it's better to either fix inherited signaling or move towards
>>> full-rbf.
>>>
>>> > if a package parent has already been submitted, it would
>>> > look
>>> > like the child is spending a "new" unconfirmed input.
>>>
>>> I think this is an issue brought by the trimming during the dedup phase.
>>> If we preserve the package integrity, only re-using the tx-level checks
>>> results of already in-mempool transactions to gain in CPU time we won't
>>> have this issue. Package children can add unconfirmed inputs as long as
>>> they're in-package, the bip125 rule2 is only evaluated against parents ?
>>>
>>> > However, we still achieve the same goal of requiring the
>>> > replacement
>>> > transactions to have an ancestor score at least as high as the original
>>> > ones.
>>>
>>> I'm not sure if this holds...
>>>
>>> Let's say you have in-mempool A, B where A pays 10 sat/vb for 100 vbytes
>>> and B pays 10 sat/vb for 100 vbytes. You have the candidate replacement D
>>> spending both A and C where D pays 15sat/vb for 100 vbytes and C pays 1
>>> sat/vb for 1000 vbytes.
>>>
>>> Package A + B ancestor score is 10 sat/vb.
>>>
>>> D has a higher feerate/absolute fee than B.
>>>
>>> Package A + C + D ancestor score is ~ 3 sat/vb ((A's 1000 sats + C's
>>> 1000 sats + D's 1500 sats) /
>>> (A's 100 vb + C's 1000 vb + D's 100 vb))
>>>
>>> Overall, this is a review through the lenses of LN requirements. I think
>>> other L2 protocols/applications
>>> could be candidates to using package accept/relay such as:
>>> * https://github.com/lightninglabs/pool
>>> * https://github.com/discreetlogcontracts/dlcspecs
>>> * https://github.com/bitcoin-teleport/teleport-transactions/
>>> * https://github.com/sapio-lang/sapio
>>> *
>>> https://github.com/commerceblock/mercury/blob/master/doc/statechains.md
>>> * https://github.com/revault/practical-revault
>>>
>>> Thanks for rolling forward the ball on this subject.
>>>
>>> Antoine
>>>
>>> Le jeu. 16 sept. 2021 à 03:55, Gloria Zhao via bitcoin-dev <
>>> bitcoin-dev@lists•linuxfoundation.org> a écrit :
>>>
>>>> Hi there,
>>>>
>>>> I'm writing to propose a set of mempool policy changes to enable package
>>>> validation (in preparation for package relay) in Bitcoin Core. These
>>>> would not
>>>> be consensus or P2P protocol changes. However, since mempool policy
>>>> significantly affects transaction propagation, I believe this is
>>>> relevant for
>>>> the mailing list.
>>>>
>>>> My proposal enables packages consisting of multiple parents and 1
>>>> child. If you
>>>> develop software that relies on specific transaction relay assumptions
>>>> and/or
>>>> are interested in using package relay in the future, I'm very
>>>> interested to hear
>>>> your feedback on the utility or restrictiveness of these package
>>>> policies for
>>>> your use cases.
>>>>
>>>> A draft implementation of this proposal can be found in [Bitcoin Core
>>>> PR#22290][1].
>>>>
>>>> An illustrated version of this post can be found at
>>>> https://gist.github.com/glozow/dc4e9d5c5b14ade7cdfac40f43adb18a.
>>>> I have also linked the images below.
>>>>
>>>> ## Background
>>>>
>>>> Feel free to skip this section if you are already familiar with mempool
>>>> policy
>>>> and package relay terminology.
>>>>
>>>> ### Terminology Clarifications
>>>>
>>>> * Package = an ordered list of related transactions, representable by a
>>>> Directed
>>>>   Acyclic Graph.
>>>> * Package Feerate = the total modified fees divided by the total
>>>> virtual size of
>>>>   all transactions in the package.
>>>>     - Modified fees = a transaction's base fees + fee delta applied by
>>>> the user
>>>>       with `prioritisetransaction`. As such, we expect this to vary
>>>> across
>>>> mempools.
>>>>     - Virtual Size = the maximum of virtual sizes calculated using
>>>> [BIP141
>>>>       virtual size][2] and sigop weight. [Implemented here in Bitcoin
>>>> Core][3].
>>>>     - Note that feerate is not necessarily based on the base fees and
>>>> serialized
>>>>       size.
>>>>
>>>> * Fee-Bumping = user/wallet actions that take advantage of miner
>>>> incentives to
>>>>   boost a transaction's candidacy for inclusion in a block, including
>>>> Child Pays
>>>> for Parent (CPFP) and [BIP125][12] Replace-by-Fee (RBF). Our intention
>>>> in
>>>> mempool policy is to recognize when the new transaction is more
>>>> economical to
>>>> mine than the original one(s) but not open DoS vectors, so there are
>>>> some
>>>> limitations.
>>>>
>>>> ### Policy
>>>>
>>>> The purpose of the mempool is to store the best (to be most
>>>> incentive-compatible
>>>> with miners, highest feerate) candidates for inclusion in a block.
>>>> Miners use
>>>> the mempool to build block templates. The mempool is also useful as a
>>>> cache for
>>>> boosting block relay and validation performance, aiding transaction
>>>> relay, and
>>>> generating feerate estimations.
>>>>
>>>> Ideally, all consensus-valid transactions paying reasonable fees should
>>>> make it
>>>> to miners through normal transaction relay, without any special
>>>> connectivity or
>>>> relationships with miners. On the other hand, nodes do not have
>>>> unlimited
>>>> resources, and a P2P network designed to let any honest node broadcast
>>>> their
>>>> transactions also exposes the transaction validation engine to DoS
>>>> attacks from
>>>> malicious peers.
>>>>
>>>> As such, for unconfirmed transactions we are considering for our
>>>> mempool, we
>>>> apply a set of validation rules in addition to consensus, primarily to
>>>> protect
>>>> us from resource exhaustion and aid our efforts to keep the highest fee
>>>> transactions. We call this mempool _policy_: a set of (configurable,
>>>> node-specific) rules that transactions must abide by in order to be
>>>> accepted
>>>> into our mempool. Transaction "Standardness" rules and mempool
>>>> restrictions such
>>>> as "too-long-mempool-chain" are both examples of policy.
>>>>
>>>> ### Package Relay and Package Mempool Accept
>>>>
>>>> In transaction relay, we currently consider transactions one at a time
>>>> for
>>>> submission to the mempool. This creates a limitation in the node's
>>>> ability to
>>>> determine which transactions have the highest feerates, since we cannot
>>>> take
>>>> into account descendants (i.e. cannot use CPFP) until all the
>>>> transactions are
>>>> in the mempool. Similarly, we cannot use a transaction's descendants
>>>> when
>>>> considering it for RBF. When an individual transaction does not meet
>>>> the mempool
>>>> minimum feerate and the user isn't able to create a replacement
>>>> transaction
>>>> directly, it will not be accepted by mempools.
>>>>
>>>> This limitation presents a security issue for applications and users
>>>> relying on
>>>> time-sensitive transactions. For example, Lightning and other protocols
>>>> create
>>>> UTXOs with multiple spending paths, where one counterparty's spending
>>>> path opens
>>>> up after a timelock, and users are protected from cheating scenarios as
>>>> long as
>>>> they redeem on-chain in time. A key security assumption is that all
>>>> parties'
>>>> transactions will propagate and confirm in a timely manner. This
>>>> assumption can
>>>> be broken if fee-bumping does not work as intended.
>>>>
>>>> The end goal for Package Relay is to consider multiple transactions at
>>>> the same
>>>> time, e.g. a transaction with its high-fee child. This may help us
>>>> better
>>>> determine whether transactions should be accepted to our mempool,
>>>> especially if
>>>> they don't meet fee requirements individually or are better RBF
>>>> candidates as a
>>>> package. A combination of changes to mempool validation logic, policy,
>>>> and
>>>> transaction relay allows us to better propagate the transactions with
>>>> the
>>>> highest package feerates to miners, and makes fee-bumping tools more
>>>> powerful
>>>> for users.
>>>>
>>>> The "relay" part of Package Relay suggests P2P messaging changes, but a
>>>> large
>>>> part of the changes are in the mempool's package validation logic. We
>>>> call this
>>>> *Package Mempool Accept*.
>>>>
>>>> ### Previous Work
>>>>
>>>> * Given that mempool validation is DoS-sensitive and complex, it would
>>>> be
>>>>   dangerous to haphazardly tack on package validation logic. Many
>>>> efforts have
>>>> been made to make mempool validation less opaque (see [#16400][4],
>>>> [#21062][5],
>>>> [#22675][6], [#22796][7]).
>>>> * [#20833][8] Added basic capabilities for package validation, test
>>>> accepts only
>>>>   (no submission to mempool).
>>>> * [#21800][9] Implemented package ancestor/descendant limit checks for
>>>> arbitrary
>>>>   packages. Still test accepts only.
>>>> * Previous package relay proposals (see [#16401][10], [#19621][11]).
>>>>
>>>> ### Existing Package Rules
>>>>
>>>> These are in master as introduced in [#20833][8] and [#21800][9]. I'll
>>>> consider
>>>> them as "given" in the rest of this document, though they can be
>>>> changed, since
>>>> package validation is test-accept only right now.
>>>>
>>>> 1. A package cannot exceed `MAX_PACKAGE_COUNT=25` count and
>>>> `MAX_PACKAGE_SIZE=101KvB` total size [8]
>>>>
>>>>    *Rationale*: This is already enforced as mempool ancestor/descendant
>>>> limits.
>>>> Presumably, transactions in a package are all related, so exceeding
>>>> this limit
>>>> would mean that the package can either be split up or it wouldn't pass
>>>> this
>>>> mempool policy.
>>>>
>>>> 2. Packages must be topologically sorted: if any dependencies exist
>>>> between
>>>> transactions, parents must appear somewhere before children. [8]
>>>>
>>>> 3. A package cannot have conflicting transactions, i.e. none of them
>>>> can spend
>>>> the same inputs. This also means there cannot be duplicate
>>>> transactions. [8]
>>>>
>>>> 4. When packages are evaluated against ancestor/descendant limits in a
>>>> test
>>>> accept, the union of all of their descendants and ancestors is
>>>> considered. This
>>>> is essentially a "worst case" heuristic where every transaction in the
>>>> package
>>>> is treated as each other's ancestor and descendant. [8]
>>>> Packages for which ancestor/descendant limits are accurately captured
>>>> by this
>>>> heuristic: [19]
>>>>
>>>> There are also limitations such as the fact that CPFP carve out is not
>>>> applied
>>>> to package transactions. #20833 also disables RBF in package
>>>> validation; this
>>>> proposal overrides that to allow packages to use RBF.
>>>>
>>>> ## Proposed Changes
>>>>
>>>> The next step in the Package Mempool Accept project is to implement
>>>> submission
>>>> to mempool, initially through RPC only. This allows us to test the
>>>> submission
>>>> logic before exposing it on P2P.
>>>>
>>>> ### Summary
>>>>
>>>> - Packages may contain already-in-mempool transactions.
>>>> - Packages are 2 generations, Multi-Parent-1-Child.
>>>> - Fee-related checks use the package feerate. This means that wallets
>>>> can
>>>> create a package that utilizes CPFP.
>>>> - Parents are allowed to RBF mempool transactions with a set of rules
>>>> similar
>>>>   to BIP125. This enables a combination of CPFP and RBF, where a
>>>> transaction's descendant fees pay for replacing mempool conflicts.
>>>>
>>>> There is a draft implementation in [#22290][1]. It is WIP, but feedback
>>>> is
>>>> always welcome.
>>>>
>>>> ### Details
>>>>
>>>> #### Packages May Contain Already-in-Mempool Transactions
>>>>
>>>> A package may contain transactions that are already in the mempool. We
>>>> remove
>>>> ("deduplicate") those transactions from the package for the purposes of
>>>> package
>>>> mempool acceptance. If a package is empty after deduplication, we do
>>>> nothing.
>>>>
>>>> *Rationale*: Mempools vary across the network. It's possible for a
>>>> parent to be
>>>> accepted to the mempool of a peer on its own due to differences in
>>>> policy and
>>>> fee market fluctuations. We should not reject or penalize the entire
>>>> package for
>>>> an individual transaction as that could be a censorship vector.
>>>>
>>>> #### Packages Are Multi-Parent-1-Child
>>>>
>>>> Only packages of a specific topology are permitted. Namely, a package
>>>> is exactly
>>>> 1 child with all of its unconfirmed parents. After deduplication, the
>>>> package
>>>> may be exactly the same, empty, 1 child, 1 child with just some of its
>>>> unconfirmed parents, etc. Note that it's possible for the parents to be
>>>> indirect
>>>> descendants/ancestors of one another, or for parent and child to share
>>>> a parent,
>>>> so we cannot make any other topology assumptions.
>>>>
>>>> *Rationale*: This allows for fee-bumping by CPFP. Allowing multiple
>>>> parents
>>>> makes it possible to fee-bump a batch of transactions. Restricting
>>>> packages to a
>>>> defined topology is also easier to reason about and simplifies the
>>>> validation
>>>> logic greatly. Multi-parent-1-child allows us to think of the package
>>>> as one big
>>>> transaction, where:
>>>>
>>>> - Inputs = all the inputs of parents + inputs of the child that come
>>>> from
>>>>   confirmed UTXOs
>>>> - Outputs = all the outputs of the child + all outputs of the parents
>>>> that
>>>>   aren't spent by other transactions in the package
>>>>
>>>> Examples of packages that follow this rule (variations of example A
>>>> show some
>>>> possibilities after deduplication): ![image][15]
>>>>
>>>> #### Fee-Related Checks Use Package Feerate
>>>>
>>>> Package Feerate = the total modified fees divided by the total virtual
>>>> size of
>>>> all transactions in the package.
>>>>
>>>> To meet the two feerate requirements of a mempool, i.e., the
>>>> pre-configured
>>>> minimum relay feerate (`minRelayTxFee`) and dynamic mempool minimum
>>>> feerate, the
>>>> total package feerate is used instead of the individual feerate. The
>>>> individual
>>>> transactions are allowed to be below feerate requirements if the
>>>> package meets
>>>> the feerate requirements. For example, the parent(s) in the package can
>>>> have 0
>>>> fees but be paid for by the child.
>>>>
>>>> *Rationale*: This can be thought of as "CPFP within a package," solving
>>>> the
>>>> issue of a parent not meeting minimum fees on its own. This allows L2
>>>> applications to adjust their fees at broadcast time instead of
>>>> overshooting or
>>>> risking getting stuck/pinned.
>>>>
>>>> We use the package feerate of the package *after deduplication*.
>>>>
>>>> *Rationale*:  It would be incorrect to use the fees of transactions
>>>> that are
>>>> already in the mempool, as we do not want a transaction's fees to be
>>>> double-counted for both its individual RBF and package RBF.
>>>>
>>>> Examples F and G [14] show the same package, but P1 is submitted
>>>> individually before
>>>> the package in example G. In example F, we can see that the 300vB
>>>> package pays
>>>> an additional 200sat in fees, which is not enough to pay for its own
>>>> bandwidth
>>>> (BIP125#4). In example G, we can see that P1 pays enough to replace M1,
>>>> but
>>>> using P1's fees again during package submission would make it look like
>>>> a 300sat
>>>> increase for a 200vB package. Even including its fees and size would
>>>> not be
>>>> sufficient in this example, since the 300sat looks like enough for the
>>>> 300vB
>>>> package. The calculcation after deduplication is 100sat increase for a
>>>> package
>>>> of size 200vB, which correctly fails BIP125#4. Assume all transactions
>>>> have a
>>>> size of 100vB.
>>>>
>>>> #### Package RBF
>>>>
>>>> If a package meets feerate requirements as a package, the parents in the
>>>> transaction are allowed to replace-by-fee mempool transactions. The
>>>> child cannot
>>>> replace mempool transactions. Multiple transactions can replace the same
>>>> transaction, but in order to be valid, none of the transactions can try
>>>> to
>>>> replace an ancestor of another transaction in the same package (which
>>>> would thus
>>>> make its inputs unavailable).
>>>>
>>>> *Rationale*: Even if we are using package feerate, a package will not
>>>> propagate
>>>> as intended if RBF still requires each individual transaction to meet
>>>> the
>>>> feerate requirements.
>>>>
>>>> We use a set of rules slightly modified from BIP125 as follows:
>>>>
>>>> ##### Signaling (Rule #1)
>>>>
>>>> All mempool transactions to be replaced must signal replaceability.
>>>>
>>>> *Rationale*: Package RBF signaling logic should be the same for package
>>>> RBF and
>>>> single transaction acceptance. This would be updated if single
>>>> transaction
>>>> validation moves to full RBF.
>>>>
>>>> ##### New Unconfirmed Inputs (Rule #2)
>>>>
>>>> A package may include new unconfirmed inputs, but the ancestor feerate
>>>> of the
>>>> child must be at least as high as the ancestor feerates of every
>>>> transaction
>>>> being replaced. This is contrary to BIP125#2, which states "The
>>>> replacement
>>>> transaction may only include an unconfirmed input if that input was
>>>> included in
>>>> one of the original transactions. (An unconfirmed input spends an
>>>> output from a
>>>> currently-unconfirmed transaction.)"
>>>>
>>>> *Rationale*: The purpose of BIP125#2 is to ensure that the replacement
>>>> transaction has a higher ancestor score than the original
>>>> transaction(s) (see
>>>> [comment][13]). Example H [16] shows how adding a new unconfirmed input
>>>> can lower the
>>>> ancestor score of the replacement transaction. P1 is trying to replace
>>>> M1, and
>>>> spends an unconfirmed output of M2. P1 pays 800sat, M1 pays 600sat, and
>>>> M2 pays
>>>> 100sat. Assume all transactions have a size of 100vB. While, in
>>>> isolation, P1
>>>> looks like a better mining candidate than M1, it must be mined with M2,
>>>> so its
>>>> ancestor feerate is actually 4.5sat/vB.  This is lower than M1's
>>>> ancestor
>>>> feerate, which is 6sat/vB.
>>>>
>>>> In package RBF, the rule analogous to BIP125#2 would be "none of the
>>>> transactions in the package can spend new unconfirmed inputs." Example
>>>> J [17] shows
>>>> why, if any of the package transactions have ancestors, package feerate
>>>> is no
>>>> longer accurate. Even though M2 and M3 are not ancestors of P1 (which
>>>> is the
>>>> replacement transaction in an RBF), we're actually interested in the
>>>> entire
>>>> package. A miner should mine M1 which is 5sat/vB instead of M2, M3, P1,
>>>> P2, and
>>>> P3, which is only 4sat/vB. The Package RBF rule cannot be loosened to
>>>> only allow
>>>> the child to have new unconfirmed inputs, either, because it can still
>>>> cause us
>>>> to overestimate the package's ancestor score.
>>>>
>>>> However, enforcing a rule analogous to BIP125#2 would not only make
>>>> Package RBF
>>>> less useful, but would also break Package RBF for packages with parents
>>>> already
>>>> in the mempool: if a package parent has already been submitted, it
>>>> would look
>>>> like the child is spending a "new" unconfirmed input. In example K
>>>> [18], we're
>>>> looking to replace M1 with the entire package including P1, P2, and P3.
>>>> We must
>>>> consider the case where one of the parents is already in the mempool
>>>> (in this
>>>> case, P2), which means we must allow P3 to have new unconfirmed inputs.
>>>> However,
>>>> M2 lowers the ancestor score of P3 to 4.3sat/vB, so we should not
>>>> replace M1
>>>> with this package.
>>>>
>>>> Thus, the package RBF rule regarding new unconfirmed inputs is less
>>>> strict than
>>>> BIP125#2. However, we still achieve the same goal of requiring the
>>>> replacement
>>>> transactions to have a ancestor score at least as high as the original
>>>> ones. As
>>>> a result, the entire package is required to be a higher feerate mining
>>>> candidate
>>>> than each of the replaced transactions.
>>>>
>>>> Another note: the [comment][13] above the BIP125#2 code in the original
>>>> RBF
>>>> implementation suggests that the rule was intended to be temporary.
>>>>
>>>> ##### Absolute Fee (Rule #3)
>>>>
>>>> The package must increase the absolute fee of the mempool, i.e. the
>>>> total fees
>>>> of the package must be higher than the absolute fees of the mempool
>>>> transactions
>>>> it replaces. Combined with the CPFP rule above, this differs from
>>>> BIP125 Rule #3
>>>> - an individual transaction in the package may have lower fees than the
>>>>   transaction(s) it is replacing. In fact, it may have 0 fees, and the
>>>> child
>>>> pays for RBF.
>>>>
>>>> ##### Feerate (Rule #4)
>>>>
>>>> The package must pay for its own bandwidth; the package feerate must be
>>>> higher
>>>> than the replaced transactions by at least minimum relay feerate
>>>> (`incrementalRelayFee`). Combined with the CPFP rule above, this
>>>> differs from
>>>> BIP125 Rule #4 - an individual transaction in the package can have a
>>>> lower
>>>> feerate than the transaction(s) it is replacing. In fact, it may have 0
>>>> fees,
>>>> and the child pays for RBF.
>>>>
>>>> ##### Total Number of Replaced Transactions (Rule #5)
>>>>
>>>> The package cannot replace more than 100 mempool transactions. This is
>>>> identical
>>>> to BIP125 Rule #5.
>>>>
>>>> ### Expected FAQs
>>>>
>>>> 1. Is it possible for only some of the package to make it into the
>>>> mempool?
>>>>
>>>>    Yes, it is. However, since we evict transactions from the mempool by
>>>> descendant score and the package child is supposed to be sponsoring the
>>>> fees of
>>>> its parents, the most common scenario would be all-or-nothing. This is
>>>> incentive-compatible. In fact, to be conservative, package validation
>>>> should
>>>> begin by trying to submit all of the transactions individually, and
>>>> only use the
>>>> package mempool acceptance logic if the parents fail due to low feerate.
>>>>
>>>> 2. Should we allow packages to contain already-confirmed transactions?
>>>>
>>>>     No, for practical reasons. In mempool validation, we actually
>>>> aren't able to
>>>> tell with 100% confidence if we are looking at a transaction that has
>>>> already
>>>> confirmed, because we look up inputs using a UTXO set. If we have
>>>> historical
>>>> block data, it's possible to look for it, but this is inefficient, not
>>>> always
>>>> possible for pruning nodes, and unnecessary because we're not going to
>>>> do
>>>> anything with the transaction anyway. As such, we already have the
>>>> expectation
>>>> that transaction relay is somewhat "stateful" i.e. nobody should be
>>>> relaying
>>>> transactions that have already been confirmed. Similarly, we shouldn't
>>>> be
>>>> relaying packages that contain already-confirmed transactions.
>>>>
>>>> [1]: https://github.com/bitcoin/bitcoin/pull/22290
>>>> [2]:
>>>> https://github.com/bitcoin/bips/blob/1f0b563738199ca60d32b4ba779797fc97d040fe/bip-0141.mediawiki#transaction-size-calculations
>>>> [3]:
>>>> https://github.com/bitcoin/bitcoin/blob/94f83534e4b771944af7d9ed0f40746f392eb75e/src/policy/policy.cpp#L282
>>>> [4]: https://github.com/bitcoin/bitcoin/pull/16400
>>>> [5]: https://github.com/bitcoin/bitcoin/pull/21062
>>>> [6]: https://github.com/bitcoin/bitcoin/pull/22675
>>>> [7]: https://github.com/bitcoin/bitcoin/pull/22796
>>>> [8]: https://github.com/bitcoin/bitcoin/pull/20833
>>>> [9]: https://github.com/bitcoin/bitcoin/pull/21800
>>>> [10]: https://github.com/bitcoin/bitcoin/pull/16401
>>>> [11]: https://github.com/bitcoin/bitcoin/pull/19621
>>>> [12]: https://github.com/bitcoin/bips/blob/master/bip-0125.mediawiki
>>>> [13]:
>>>> https://github.com/bitcoin/bitcoin/pull/6871/files#diff-34d21af3c614ea3cee120df276c9c4ae95053830d7f1d3deaf009a4625409ad2R1101-R1104
>>>> [14]:
>>>> https://user-images.githubusercontent.com/25183001/133567078-075a971c-0619-4339-9168-b41fd2b90c28.png
>>>> [15]:
>>>> https://user-images.githubusercontent.com/25183001/132856734-fc17da75-f875-44bb-b954-cb7a1725cc0d.png
>>>> [16]:
>>>> https://user-images.githubusercontent.com/25183001/133567347-a3e2e4a8-ae9c-49f8-abb9-81e8e0aba224.png
>>>> [17]:
>>>> https://user-images.githubusercontent.com/25183001/133567370-21566d0e-36c8-4831-b1a8-706634540af3.png
>>>> [18]:
>>>> https://user-images.githubusercontent.com/25183001/133567444-bfff1142-439f-4547-800a-2ba2b0242bcb.png
>>>> [19]:
>>>> https://user-images.githubusercontent.com/25183001/133456219-0bb447cb-dcb4-4a31-b9c1-7d86205b68bc.png
>>>> [20]:
>>>> https://user-images.githubusercontent.com/25183001/132857787-7b7c6f56-af96-44c8-8d78-983719888c19.png
>>>> _______________________________________________
>>>> bitcoin-dev mailing list
>>>> bitcoin-dev@lists•linuxfoundation.org
>>>> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>>>>
>>>

[-- Attachment #2: Type: text/html, Size: 60138 bytes --]

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [bitcoin-dev] Proposal: Package Mempool Accept and Package RBF
  2021-09-23 15:36       ` Gloria Zhao
@ 2021-09-26 21:10         ` Antoine Riard
  2021-09-27  7:15           ` Bastien TEINTURIER
  2021-10-14 10:48         ` darosior
  1 sibling, 1 reply; 16+ messages in thread
From: Antoine Riard @ 2021-09-26 21:10 UTC (permalink / raw)
  To: Gloria Zhao; +Cc: Bitcoin Protocol Discussion

[-- Attachment #1: Type: text/plain, Size: 65234 bytes --]

Hi Gloria,

Thanks for your answers,

> In summary, it seems that the decisions that might still need
> attention/input from devs on this mailing list are:
> 1. Whether we should start with multiple-parent-1-child or
> 1-parent-1-child.
> 2. Whether it's ok to require that the child not have conflicts with
> mempool transactions.

Yes, for 1) it would be good to have input from more potential users of
package acceptance. And for 2) I think it's more a matter of clearer wording
of the proposal.

However, see my final point on the relaxation around "unconfirmed inputs"
which might in fact alter our current block construction strategy.

> Right, the fact that we essentially always choose the first-seen witness
> is
> an unfortunate limitation that exists already. Adding package mempool
> accept doesn't worsen this, but the procedure in the future is to replace
> the witness when it makes sense economically. We can also add logic to
> allow package feerate to pay for witness replacements as well. This is
> pretty far into the future, though.

Yes, I agree package mempool accept doesn't worsen this. And it's not an
issue for current LN as you can't significantly inflate a spending witness
for the 2-of-2 funding output.
However, it might be an issue for multi-party protocols where the spending
script has alternative branches with asymmetric valid witness weights.
Taproot should ease that kind of script, so hopefully we deploy
wtxid-replacement not too far in the future.

> I could be misunderstanding, but an attacker wouldn't be able to
> batch-attack like this. Alice's package only conflicts with A' + D', not
> A'
> + B' + C' + D'. She only needs to pay for evicting 2 transactions.

Yeah, I can be clearer. I think there are 2 pinning attack scenarios to
consider.

In LN, if you're trying to confirm a commitment transaction to time-out or
claim an HTLC on-chain and the timelock is near expiration, you should be
ready to pay as much in commitment + 2nd-stage HTLC transaction fees as the
value offered by the HTLC.

Following this security assumption, an attacker can exploit it by targeting
commitment transactions from different channels together, blocking them
under a high-fee child whose fee value
is equal to the top-value HTLC + 1. Victims' fee-bumping logic won't
overbid, as it isn't worth offering fees beyond the HTLCs they're competing
for. Apart from observing mempool state, victims can't learn they're
targeted by the same attacker.

To draw from the aforementioned topology, Mallory broadcasts A' + B' + C' +
D', where A' conflicts with Alice's P1, B' conflicts with Bob's P2, and C'
conflicts with Caroll's P3. Let's assume P1 is confirming the top-value
HTLC of the set. If D' pays more than P1 + 1 in fees, it won't be rational
for Alice, Bob, or Caroll to keep offering competing feerates. Mallory will
be at a loss on stealing P1, as she has paid more in fees, but will realize
a gain on P2+P3.

In this model, Alice is allowed to evict those 2 transactions (A' + D'),
but as she is economically bounded, she won't succeed.

Mallory is maliciously exploiting RBF rule 3 on absolute fee. I think this
1st pinning scenario is correct and "lucrative" when you sum the global
gain/loss.
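
To make the gain/loss arithmetic concrete, a rough sketch in Python; every
HTLC value here is an assumption picked for illustration, not a number from
the proposal:

    # Pinning scenario 1 sketch: Mallory pins P1, P2, P3 under a single
    # high-fee child D'. All values in satoshis and purely illustrative.
    htlc_value = {"P1": 100_000, "P2": 60_000, "P3": 50_000}  # top HTLC protected by each victim

    # D' pays just above the top-value HTLC, so no victim's fee-bumper will
    # rationally overbid.
    mallory_fees = max(htlc_value.values()) + 1

    # The pinned commitments miss their timelocks and Mallory claims the HTLCs.
    mallory_steals = sum(htlc_value.values())
    print(f"fees {mallory_fees} sat, stolen {mallory_steals} sat, "
          f"net {mallory_steals - mallory_fees} sat")
    # Stealing P1 alone would be a net loss (the fee exceeds its 100_000 sat
    # HTLC), but batching P2 and P3 under the same child makes the attack
    # profitable overall.

The exact numbers don't matter; the point is that the per-victim bound on
rational fee-bumping doesn't bound the attacker's aggregate gain.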

There is a 2nd attack scenario with A + B + C + D, where D is the child of
A, B, C. All those transactions are honestly issued by Alice. Once A + B + C
+ D are propagated in network mempools, Mallory is able to replace A + D
with A' + D', where D' pays a higher fee. This package A' + D' will confirm
soon if D's feerate was compelling, but Mallory succeeds in delaying the
confirmation of B + C for one or more blocks. As B + C are pre-signed
commitments with a low feerate, they won't confirm without Alice issuing a
new child E. Mallory can repeat the same trick by broadcasting B' + E' and
delaying again the confirmation of C.

If the pending HTLCs in the remaining package are worth more than all the
fees Mallory over-bids, Mallory should realize a gain. With this 2nd
pinning attack, the malicious entity buys confirmation delay on your
packaged-together commitments.
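
A rough sketch of the over-bidding ladder, with every fee value an
assumption for illustration only:

    # Pinning scenario 2 sketch: Mallory buys confirmation delay round by round.
    INCREMENTAL = 100         # sats added on top of the honest bid to win a replacement (assumed)
    honest_child_fee = 5_000  # fee Alice attaches via her shared CPFP child each round (assumed)

    rounds = [("A + D", "A' + D'"), ("B + E", "B' + E'")]
    total_overbid = 0
    for i, (honest, malicious) in enumerate(rounds, start=1):
        # Must out-pay the honest parent + child; the low-fee commitment is
        # negligible here (assumption).
        bid = honest_child_fee + INCREMENTAL
        total_overbid += bid
        print(f"round {i}: {malicious} replaces {honest} for {bid} sat, "
              f"remaining commitments delayed ~1 more block")
    print(f"total over-bid by Mallory: {total_overbid} sat")
    # If the HTLCs pending in the still-delayed commitments are worth more
    # than this total, buying the delay is a win for Mallory.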

Assuming those attacks are correct, I'm leaning towards being conservative
with the LDK broadcast backend. Though once again, other L2 devs likely
have other use cases and opinions :)

>  B' only needs to pay for itself in this case.

Yes, I think it's a nice discount when the UTXO is single-owned. In the
context of a shared-owned UTXO (e.g. LN), you might not benefit from it if
there is an in-mempool package already spending the UTXO, and you have to
assume the worst-case scenario, i.e. have B' commit enough fees to pay for
the bandwidth of replacing A'. I don't think we can do much for this case...
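
A minimal sketch of that worst-case budgeting, assuming BIP125-style rules
#3/#4 (absolute fees of everything evicted, plus incremental relay fee for
the replacement's own bandwidth); the concrete numbers are made up:

    # Worst-case child fee budget when a competing package may already spend
    # the shared UTXO. Numbers are illustrative assumptions.
    INCREMENTAL_RELAY_FEERATE = 1  # sat/vB, Bitcoin Core's default -incrementalrelayfee

    def worst_case_child_fee(evicted_fees_sat, our_package_vsize_vb):
        # Rule #3: at least match the absolute fees of everything being evicted.
        # Rule #4: additionally pay for our replacement package's own bandwidth.
        return evicted_fees_sat + our_package_vsize_vb * INCREMENTAL_RELAY_FEERATE

    # Assume the counterparty's in-mempool commitment + child pays 3_000 sat
    # total and our replacement package weighs 300 vB: the child must then
    # commit at least 3_300 sat.
    print(worst_case_child_fee(evicted_fees_sat=3_000, our_package_vsize_vb=300))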

> If a package meets feerate requirements as a
> package, the parents in the transaction are allowed to replace-by-fee
> mempool transactions. The child cannot

I agree with the Mallory-vs-Alice case. Though what if Alice broadcasts
A+B' to replace A+B because the first broadcast isn't satisfying anymore
due to mempool spikes? Assuming B' pays enough fees, I see that case as
child B' replacing in-mempool transaction B, which I understand as going
against "The child cannot replace mempool transactions".

Maybe the wording could be a bit clearer?

> While it would be nice to have full RBF, malleability of the child won't
> block RBF here. If we're trying to replace A', we only require that A'
> signals replaceability, and don't mind if its child doesn't.

Yes, it sounds good.

> Yes, A+C+D pays 2500sat more in fees, but it is also 1000vB larger. A
> miner
> should prefer to utilize their block space more effectively.

If your mempool is otherwise empty and only contains A+C+D or A+B, isn't
taking A+C+D the most efficient block construction you can come up with as
a miner?
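
Running the numbers from the example upthread (A: 1000 sat / 100 vB, B:
1000 sat / 100 vB, C: 1000 sat / 1000 vB, D: 1500 sat / 100 vB):

    # Fee (sat) and size (vB) pairs from the example discussed in this thread.
    A, B, C, D = (1000, 100), (1000, 100), (1000, 1000), (1500, 100)

    def package_stats(txs):
        fee = sum(f for f, _ in txs)
        vsize = sum(s for _, s in txs)
        return fee, vsize, round(fee / vsize, 2)

    print("A+B   ->", package_stats([A, B]))      # (2000, 200, 10.0 sat/vB)
    print("A+C+D ->", package_stats([A, C, D]))   # (3500, 1200, 2.92 sat/vB)
    # A+C+D pays 1500 sat more but consumes 1000 vB more block space: with an
    # otherwise empty mempool that space is free, so the higher absolute fee
    # wins; with competing transactions above 1.5 sat/vB for those 1000 vB,
    # keeping A+B is the better template.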

> No, because we don't use that model.

Can you describe what miner model we are using? Like the block construction
strategy implemented by `addPackageTxs`, or also encompassing our current
mempool acceptance policy, which I think relies on absolute fee over
ancestor score in the case of replacement?

I think this point is worth discussing, as otherwise we might downgrade the
efficiency of our current block construction strategy in periods of
near-empty mempools. That knowledge could be discreetly leveraged by a
miner to gain an advantage over the rest of the mining ecosystem.

Note, I think we *might* have to go in this direction if we want to replace
replace-by-fee with replace-by-feerate or replace-by-ancestor and solve
pinning attacks in depth. Though if we do so,
IMO we would need more thought.
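
For reference, a toy sketch of the greedy-by-ancestor-score selection that
I understand `addPackageTxs` approximates (heavily simplified: it doesn't
deduplicate shared ancestors or update scores after inclusion, so it only
shows the ordering criterion):

    # Toy greedy block construction by ancestor score. Illustrative only.
    def build_template(mempool, max_block_vsize=1_000_000):
        # mempool: list of dicts with txid, ancestor set fee, ancestor set vsize.
        by_score = sorted(mempool,
                          key=lambda e: e["ancestor_fee"] / e["ancestor_vsize"],
                          reverse=True)
        template, used = [], 0
        for entry in by_score:
            if used + entry["ancestor_vsize"] <= max_block_vsize:
                template.append(entry["txid"])   # stands in for the tx plus its ancestors
                used += entry["ancestor_vsize"]
        return template, used

    # e.g. build_template([{"txid": "B", "ancestor_fee": 2000, "ancestor_vsize": 200},
    #                      {"txid": "D", "ancestor_fee": 3500, "ancestor_vsize": 1200}])
    # picks B first (10 sat/vB) and D after, space permitting; with a
    # near-empty mempool everything fits, so the ordering only bites when
    # blocks are full.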

I think we could restrict package acceptance to only confirmed inputs for
now and revisit this point later? For LN anchors, you can assume that the
fee-bumping UTXO feeding the CPFP is already
confirmed. Or are there currently-deployed use cases which would benefit
from your proposed Rule #2?
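
If we went that conservative route, the check could be as small as
something like the following at package pre-processing; `is_confirmed` and
the data shapes are hypothetical helpers, not anything from #22290:

    # Sketch: reject a package if any transaction spends an unconfirmed output
    # that is not created inside the package itself.
    def package_spends_only_confirmed_inputs(package, is_confirmed):
        in_package_txids = {tx["txid"] for tx in package}
        for tx in package:
            for prev_txid, _prev_vout in tx["inputs"]:
                if prev_txid in in_package_txids:
                    continue                  # parent is inside the package: fine
                if not is_confirmed(prev_txid):
                    return False              # spends an outside unconfirmed tx: reject
        return True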

Antoine

On Thu, Sep 23, 2021 at 11:36, Gloria Zhao <gloriajzhao@gmail•com> wrote:

> Hi Antoine,
>
> Thanks as always for your input. I'm glad we agree on so much!
>
> In summary, it seems that the decisions that might still need
> attention/input from devs on this mailing list are:
> 1. Whether we should start with multiple-parent-1-child or
> 1-parent-1-child.
> 2. Whether it's ok to require that the child not have conflicts with
> mempool transactions.
>
> Responding to your comments...
>
> > IIUC, you have package A+B, during the dedup phase early in
> `AcceptMultipleTransactions` if you observe same-txid-different-wtixd A'
> and A' is higher feerate than A, you trim A and replace by A' ?
>
> > I think this approach is safe, the one who appears unsafe to me is when
> A' has a _lower_ feerate, even if A' is already accepted by our mempool ?
> In that case iirc that would be a pinning.
>
> Right, the fact that we essentially always choose the first-seen witness
> is an unfortunate limitation that exists already. Adding package mempool
> accept doesn't worsen this, but the procedure in the future is to replace
> the witness when it makes sense economically. We can also add logic to
> allow package feerate to pay for witness replacements as well. This is
> pretty far into the future, though.
>
> > It sounds uneconomical for an attacker but I think it's not when you
> consider than you can "batch" attack against multiple honest
> counterparties. E.g, Mallory broadcast A' + B' + C' + D' where A' conflicts
> with Alice's honest package P1, B' conflicts with Bob's honest package P2,
> C' conflicts with Caroll's honest package P3. And D' is a high-fee child of
> A' + B' + C'.
>
> > If D' is higher-fee than P1 or P2 or P3 but inferior to the sum of HTLCs
> confirmed by P1+P2+P3, I think it's lucrative for the attacker ?
>
> I could be misunderstanding, but an attacker wouldn't be able to
> batch-attack like this. Alice's package only conflicts with A' + D', not A'
> + B' + C' + D'. She only needs to pay for evicting 2 transactions.
>
> > Do we assume that broadcasted packages are "honest" by default and that
> the parent(s) always need the child to pass the fee checks, that way saving
> the processing of individual transactions which are expected to fail in 99%
> of cases or more ad hoc composition of packages at relay ?
> > I think this point is quite dependent on the p2p packages format/logic
> we'll end up on and that we should feel free to revisit it later ?
>
> I think it's the opposite; there's no way for us to assume that p2p
> packages will be "honest." I'd like to have two things before we expose on
> P2P: (1) ensure that the amount of resources potentially allocated for
> package validation isn't disproportionately higher than that of single
> transaction validation and (2) only use package validation when we're
> unsatisifed with the single validation result, e.g. we might get better
> fees.
> Yes, let's revisit this later :)
>
>  > Yes, if you receive A+B, and A is already in-mempoo, I agree you can
> discard its feerate as B should pay for all fees checked on its own. Where
> I'm unclear is when you have in-mempool A+B and receive A+B'. Should B'
> have a fee high enough to cover the bandwidth penalty replacement
> (`PaysForRBF`, 2nd check) of both A+B' or only B' ?
>
>  B' only needs to pay for itself in this case.
>
> > > Do we want the child to be able to replace mempool transactions as
> well?
>
> > If we mean when you have replaceable A+B then A'+B' try to replace with
> a higher-feerate ? I think that's exactly the case we need for Lightning as
> A+B is coming from Alice and A'+B' is coming from Bob :/
>
> Let me clarify this because I can see that my wording was ambiguous, and
> then please let me know if it fits Lightning's needs?
>
> In my proposal, I wrote "If a package meets feerate requirements as a
> package, the parents in the transaction are allowed to replace-by-fee
> mempool transactions. The child cannot replace mempool transactions." What
> I meant was: the package can replace mempool transactions if any of the
> parents conflict with mempool transactions. The child cannot not conflict
> with any mempool transactions.
> The Lightning use case this attempts to address is: Alice and Mallory are
> LN counterparties, and have packages A+B and A'+B', respectively. A and A'
> are their commitment transactions and conflict with each other; they have
> shared inputs and different txids.
> B spends Alice's anchor output from A. B' spends Mallory's anchor output
> from A'. Thus, B and B' do not conflict with each other.
> Alice can broadcast her package, A+B, to replace Mallory's package, A'+B',
> since B doesn't conflict with the mempool.
>
> Would this be ok?
>
> > The second option, a child of A', In the LN case I think the CPFP is
> attached on one's anchor output.
>
> While it would be nice to have full RBF, malleability of the child won't
> block RBF here. If we're trying to replace A', we only require that A'
> signals replaceability, and don't mind if its child doesn't.
>
> > > B has an ancestor score of 10sat/vb and D has an
> > > ancestor score of ~2.9sat/vb. Since D's ancestor score is lower than
> B's,
> > > it fails the proposed package RBF Rule #2, so this package would be
> > > rejected. Does this meet your expectations?
>
> > Well what sounds odd to me, in my example, we fail D even if it has a
> higher-fee than B. Like A+B absolute fees are 2000 sats and A+C+D absolute
> fees are 4500 sats ?
>
> Yes, A+C+D pays 2500sat more in fees, but it is also 1000vB larger. A
> miner should prefer to utilize their block space more effectively.
>
> > Is this compatible with a model where a miner prioritizes absolute fees
> over ancestor score, in the case that mempools aren't full-enough to
> fulfill a block ?
>
> No, because we don't use that model.
>
> Thanks,
> Gloria
>
> On Thu, Sep 23, 2021 at 5:29 AM Antoine Riard <antoine.riard@gmail•com>
> wrote:
>
>> > Correct, if B+C is too low feerate to be accepted, we will reject it. I
>> > prefer this because it is incentive compatible: A can be mined by
>> itself,
>> > so there's no reason to prefer A+B+C instead of A.
>> > As another way of looking at this, consider the case where we do accept
>> > A+B+C and it sits at the "bottom" of our mempool. If our mempool reaches
>> > capacity, we evict the lowest descendant feerate transactions, which are
>> > B+C in this case. This gives us the same resulting mempool, with A and
>> not
>> > B+C.
>>
>> I agree here. Doing otherwise, we might evict other transactions mempool
>> in `MempoolAccept::Finalize` with a higher-feerate than B+C while those
>> evicted transactions are the most compelling for block construction.
>>
>> I thought at first missing this acceptance requirement would break a
>> fee-bumping scheme like Parent-Pay-For-Child where a high-fee parent is
>> attached to a child signed with SIGHASH_ANYONECANPAY but in this case the
>> child fee is capturing the parent value. I can't think of other fee-bumping
>> schemes potentially affected. If they do exist I would say they're wrong in
>> their design assumptions.
>>
>> > If or when we have witness replacement, the logic is: if the individual
>> > transaction is enough to replace the mempool one, the replacement will
>> > happen during the preceding individual transaction acceptance, and
>> > deduplication logic will work. Otherwise, we will try to deduplicate by
>> > wtxid, see that we need a package witness replacement, and use the
>> package
>> > feerate to evaluate whether this is economically rational.
>>
>> IIUC, you have package A+B, during the dedup phase early in
>> `AcceptMultipleTransactions` if you observe same-txid-different-wtixd A'
>> and A' is higher feerate than A, you trim A and replace by A' ?
>>
>> I think this approach is safe, the one who appears unsafe to me is when
>> A' has a _lower_ feerate, even if A' is already accepted by our mempool ?
>> In that case iirc that would be a pinning.
>>
>> Good to see progress on witness replacement before we see usage of
>> Taproot tree in the context of multi-party, where a malicious counterparty
>> inflates its witness to jam a honest spending.
>>
>> (Note, the commit linked currently points nowhere :))
>>
>>
>> > Please note that A may replace A' even if A' has higher fees than A
>> > individually, because the proposed package RBF utilizes the fees and
>> size
>> > of the entire package. This just requires E to pay enough fees, although
>> > this can be pretty high if there are also potential B' and C' competing
>> > commitment transactions that we don't know about.
>>
>> Ah right, if the package acceptance waives `PaysMoreThanConflicts` for
>> the individual check on A, the honest package should replace the pinning
>> attempt. I've not fully parsed the proposed implementation yet.
>>
>> Though note, I think it's still unsafe for a Lightning
>> multi-commitment-broadcast-as-one-package as a malicious A' might have an
>> absolute fee higher than E. It sounds uneconomical for
>> an attacker but I think it's not when you consider than you can "batch"
>> attack against multiple honest counterparties. E.g, Mallory broadcast A' +
>> B' + C' + D' where A' conflicts with Alice's honest package P1, B'
>> conflicts with Bob's honest package P2, C' conflicts with Caroll's honest
>> package P3. And D' is a high-fee child of A' + B' + C'.
>>
>> If D' is higher-fee than P1 or P2 or P3 but inferior to the sum of HTLCs
>> confirmed by P1+P2+P3, I think it's lucrative for the attacker ?
>>
>> > So far, my understanding is that multi-parent-1-child is desired for
>> > batched fee-bumping (
>> > https://github.com/bitcoin/bitcoin/pull/22674#issuecomment-897951289)
>> and
>> > I've also seen your response which I have less context on (
>> > https://github.com/bitcoin/bitcoin/pull/22674#issuecomment-900352202).
>> That
>> > being said, I am happy to create a new proposal for 1 parent + 1 child
>> > (which would be slightly simpler) and plan for moving to
>> > multi-parent-1-child later if that is preferred. I am very interested in
>> > hearing feedback on that approach.
>>
>> I think batched fee-bumping is okay as long as you don't have
>> time-sensitive outputs encumbering your commitment transactions. For the
>> reasons mentioned above, I think that's unsafe.
>>
>> What I'm worried about is  L2 developers, potentially not aware about all
>> the mempool subtleties blurring the difference and always batching their
>> broadcast by default.
>>
>> IMO, a good thing by restraining to 1-parent + 1 child,  we artificially
>> constraint L2 design space for now and minimize risks of unsafe usage of
>> the package API :)
>>
>> I think that's a point where it would be relevant to have the opinion of
>> more L2 devs.
>>
>> > I think there is a misunderstanding here - let me describe what I'm
>> > proposing we'd do in this situation: we'll try individual submission
>> for A,
>> > see that it fails due to "insufficient fees." Then, we'll try package
>> > validation for A+B and use package RBF. If A+B pays enough, it can still
>> > replace A'. If A fails for a bad signature, we won't look at B or A+B.
>> Does
>> > this meet your expectations?
>>
>> Yes there was a misunderstanding, I think this approach is correct, it's
>> more a question of performance. Do we assume that broadcasted packages are
>> "honest" by default and that the parent(s) always need the child to pass
>> the fee checks, that way saving the processing of individual transactions
>> which are expected to fail in 99% of cases or more ad hoc composition of
>> packages at relay ?
>>
>> I think this point is quite dependent on the p2p packages format/logic
>> we'll end up on and that we should feel free to revisit it later ?
>>
>>
>> > What problem are you trying to solve by the package feerate *after*
>> dedup
>> rule ?
>> > My understanding is that an in-package transaction might be already in
>> the mempool. Therefore, to compute a correct RBF penalty replacement, the
>> vsize of this transaction could be discarded lowering the cost of package
>> RBF.
>>
>> > I'm proposing that, when a transaction has already been submitted to
>> > mempool, we would ignore both its fees and vsize when calculating
>> package
>> > feerate.
>>
>> Yes, if you receive A+B, and A is already in-mempoo, I agree you can
>> discard its feerate as B should pay for all fees checked on its own. Where
>> I'm unclear is when you have in-mempool A+B and receive A+B'. Should B'
>> have a fee high enough to cover the bandwidth penalty replacement
>> (`PaysForRBF`, 2nd check) of both A+B' or only B' ?
>>
>> If you have a second-layer like current Lightning, you might have a
>> counterparty commitment to replace and should always expect to have to pay
>> for parent replacement bandwidth.
>>
>> Where a potential discount sounds interesting is when you have an
>> univoque state on the first-stage of transactions. E.g DLC's funding
>> transaction which might be CPFP by any participant iirc.
>>
>> > Note that, if C' conflicts with C, it also conflicts with D, since D is
>> a
>> > descendant of C and would thus need to be evicted along with it.
>>
>> Ah once again I think it's a misunderstanding without the code under my
>> eyes! If we do C' `PreChecks`, solve the conflicts provoked by it, i.e mark
>> for potential eviction D and don't consider it for future conflicts in the
>> rest of the package, I think D' `PreChecks` should be good ?
>>
>> > More generally, this example is surprising to me because I didn't think
>> > packages would be used to fee-bump replaceable transactions. Do we want
>> the
>> > child to be able to replace mempool transactions as well?
>>
>> If we mean when you have replaceable A+B then A'+B' try to replace with a
>> higher-feerate ? I think that's exactly the case we need for Lightning as
>> A+B is coming from Alice and A'+B' is coming from Bob :/
>>
>> > I'm not sure what you mean? Let's say we have a package of parent A +
>> child
>> > B, where A is supposed to replace a mempool transaction A'. Are you
>> saying
>> > that counterparties are able to malleate the package child B, or a
>> child of
>> > A'?
>>
>> The second option, a child of A', In the LN case I think the CPFP is
>> attached on one's anchor output.
>>
>> I think it's good if we assume the
>> solve-conflicts-after-parent's`'PreChecks` mentioned above or fixing
>> inherited signaling or full-rbf ?
>>
>> > Sorry, I don't understand what you mean by "preserve the package
>> > integrity?" Could you elaborate?
>>
>> After thinking the relaxation about the "new" unconfirmed input is not
>> linked to trimming but I would say more to the multi-parent support.
>>
>> Let's say you have A+B trying to replace C+D where B is also spending
>> already in-mempool E. To succeed, you need to waive the no-new-unconfirmed
>> input as D isn't spending E.
>>
>> So good, I think we agree on the problem description here.
>>
>> > I am in agreement with your calculations but unsure if we disagree on
>> the
>> > expected outcome. Yes, B has an ancestor score of 10sat/vb and D has an
>> > ancestor score of ~2.9sat/vb. Since D's ancestor score is lower than
>> B's,
>> > it fails the proposed package RBF Rule #2, so this package would be
>> > rejected. Does this meet your expectations?
>>
>> Well what sounds odd to me, in my example, we fail D even if it has a
>> higher-fee than B. Like A+B absolute fees are 2000 sats and A+C+D absolute
>> fees are 4500 sats ?
>>
>> Is this compatible with a model where a miner prioritizes absolute fees
>> over ancestor score, in the case that mempools aren't full-enough to
>> fulfill a block ?
>>
>> Let me know if I can clarify a point.
>>
>> Antoine
>>
>> Le lun. 20 sept. 2021 à 11:10, Gloria Zhao <gloriajzhao@gmail•com> a
>> écrit :
>>
>>>
>>> Hi Antoine,
>>>
>>> First of all, thank you for the thorough review. I appreciate your
>>> insight on LN requirements.
>>>
>>> > IIUC, you have a package A+B+C submitted for acceptance and A is
>>> already in your mempool. You trim out A from the package and then evaluate
>>> B+C.
>>>
>>> > I think this might be an issue if A is the higher-fee element of the
>>> ABC package. B+C package fees might be under the mempool min fee and will
>>> be rejected, potentially breaking the acceptance expectations of the
>>> package issuer ?
>>>
>>> Correct, if B+C is too low feerate to be accepted, we will reject it. I
>>> prefer this because it is incentive compatible: A can be mined by itself,
>>> so there's no reason to prefer A+B+C instead of A.
>>> As another way of looking at this, consider the case where we do accept
>>> A+B+C and it sits at the "bottom" of our mempool. If our mempool reaches
>>> capacity, we evict the lowest descendant feerate transactions, which are
>>> B+C in this case. This gives us the same resulting mempool, with A and not
>>> B+C.
>>>
>>>
>>> > Further, I think the dedup should be done on wtxid, as you might have
>>> multiple valid witnesses. Though with varying vsizes and as such offering
>>> different feerates.
>>>
>>> I agree that variations of the same package with different witnesses is
>>> a case that must be handled. I consider witness replacement to be a project
>>> that can be done in parallel to package mempool acceptance because being
>>> able to accept packages does not worsen the problem of a
>>> same-txid-different-witness "pinning" attack.
>>>
>>> If or when we have witness replacement, the logic is: if the individual
>>> transaction is enough to replace the mempool one, the replacement will
>>> happen during the preceding individual transaction acceptance, and
>>> deduplication logic will work. Otherwise, we will try to deduplicate by
>>> wtxid, see that we need a package witness replacement, and use the package
>>> feerate to evaluate whether this is economically rational.
>>>
>>> See the #22290 "handle package transactions already in mempool" commit (
>>> https://github.com/bitcoin/bitcoin/pull/22290/commits/fea75a2237b46cf76145242fecad7e274bfcb5ff),
>>> which handles the case of same-txid-different-witness by simply using the
>>> transaction in the mempool for now, with TODOs for what I just described.
>>>
>>>
>>> > I'm not clearly understanding the accepted topologies. By "parent and
>>> child to share a parent", do you mean the set of transactions A, B, C,
>>> where B is spending A and C is spending A and B would be correct ?
>>>
>>> Yes, that is what I meant. Yes, that would a valid package under these
>>> rules.
>>>
>>> > If yes, is there a width-limit introduced or we fallback on
>>> MAX_PACKAGE_COUNT=25 ?
>>>
>>> No, there is no limit on connectivity other than "child with all
>>> unconfirmed parents." We will enforce MAX_PACKAGE_COUNT=25 and child's
>>> in-mempool + in-package ancestor limits.
>>>
>>>
>>> > Considering the current Core's mempool acceptance rules, I think CPFP
>>> batching is unsafe for LN time-sensitive closure. A malicious tx-relay
>>> jamming successful on one channel commitment transaction would contamine
>>> the remaining commitments sharing the same package.
>>>
>>> > E.g, you broadcast the package A+B+C+D+E where A,B,C,D are commitment
>>> transactions and E a shared CPFP. If a malicious A' transaction has a
>>> better feerate than A, the whole package acceptance will fail. Even if A'
>>> confirms in the following block,
>>> the propagation and confirmation of B+C+D have been delayed. This could
>>> carry on a loss of funds.
>>>
>>> Please note that A may replace A' even if A' has higher fees than A
>>> individually, because the proposed package RBF utilizes the fees and size
>>> of the entire package. This just requires E to pay enough fees, although
>>> this can be pretty high if there are also potential B' and C' competing
>>> commitment transactions that we don't know about.
>>>
>>>
>>> > IMHO, I'm leaning towards deploying during a first phase
>>> 1-parent/1-child. I think it's the most conservative step still improving
>>> second-layer safety.
>>>
>>> So far, my understanding is that multi-parent-1-child is desired for
>>> batched fee-bumping (
>>> https://github.com/bitcoin/bitcoin/pull/22674#issuecomment-897951289)
>>> and I've also seen your response which I have less context on (
>>> https://github.com/bitcoin/bitcoin/pull/22674#issuecomment-900352202).
>>> That being said, I am happy to create a new proposal for 1 parent + 1 child
>>> (which would be slightly simpler) and plan for moving to
>>> multi-parent-1-child later if that is preferred. I am very interested in
>>> hearing feedback on that approach.
>>>
>>>
>>> > If A+B is submitted to replace A', where A pays 0 sats, B pays 200
>>> sats and A' pays 100 sats. If we apply the individual RBF on A, A+B
>>> acceptance fails. For this reason I think the individual RBF should be
>>> bypassed and only the package RBF apply ?
>>>
>>> I think there is a misunderstanding here - let me describe what I'm
>>> proposing we'd do in this situation: we'll try individual submission for A,
>>> see that it fails due to "insufficient fees." Then, we'll try package
>>> validation for A+B and use package RBF. If A+B pays enough, it can still
>>> replace A'. If A fails for a bad signature, we won't look at B or A+B. Does
>>> this meet your expectations?
>>>
>>>
>>> > What problem are you trying to solve by the package feerate *after*
>>> dedup rule ?
>>> > My understanding is that an in-package transaction might be already in
>>> the mempool. Therefore, to compute a correct RBF penalty replacement, the
>>> vsize of this transaction could be discarded lowering the cost of package
>>> RBF.
>>>
>>> I'm proposing that, when a transaction has already been submitted to
>>> mempool, we would ignore both its fees and vsize when calculating package
>>> feerate. In example G2, we shouldn't count M1 fees after its submission to
>>> mempool, since M1's fees have already been used to pay for its individual
>>> bandwidth, and it shouldn't be used again to pay for P2 and P3's bandwidth.
>>> We also shouldn't count its vsize, since it has already been paid for.
>>>
>>>
>>> > I think this is a footgunish API, as if a package issuer send the
>>> multiple-parent-one-child package A,B,C,D where D is the child of A,B,C.
>>> Then try to broadcast the higher-feerate C'+D' package, it should be
>>> rejected. So it's breaking the naive broadcaster assumption that a
>>> higher-feerate/higher-fee package always replaces ?
>>>
>>> Note that, if C' conflicts with C, it also conflicts with D, since D is
>>> a descendant of C and would thus need to be evicted along with it.
>>> Implicitly, D' would not be in conflict with D.
>>> More generally, this example is surprising to me because I didn't think
>>> packages would be used to fee-bump replaceable transactions. Do we want the
>>> child to be able to replace mempool transactions as well? This can be
>>> implemented with a bit of additional logic.
>>>
>>> > I think this is unsafe for L2s if counterparties have malleability of
>>> the child transaction. They can block your package replacement by
>>> opting-out from RBF signaling. IIRC, LN's "anchor output" presents such an
>>> ability.
>>>
>>> I'm not sure what you mean? Let's say we have a package of parent A +
>>> child B, where A is supposed to replace a mempool transaction A'. Are you
>>> saying that counterparties are able to malleate the package child B, or a
>>> child of A'? If they can malleate a child of A', that shouldn't matter as
>>> long as A' is signaling replacement. This would be handled identically with
>>> full RBF and what Core currently implements.
>>>
>>> > I think this is an issue brought by the trimming during the dedup
>>> phase. If we preserve the package integrity, only re-using the tx-level
>>> checks results of already in-mempool transactions to gain in CPU time we
>>> won't have this issue. Package childs can add unconfirmed inputs as long as
>>> they're in-package, the bip125 rule2 is only evaluated against parents ?
>>>
>>> Sorry, I don't understand what you mean by "preserve the package
>>> integrity?" Could you elaborate?
>>>
>>> > Let's say you have in-mempool A, B where A pays 10 sat/vb for 100
>>> vbytes and B pays 10 sat/vb for 100 vbytes. You have the candidate
>>> replacement D spending both A and C where D pays 15sat/vb for 100 vbytes
>>> and C pays 1 sat/vb for 1000 vbytes.
>>>
>>> > Package A + B ancestor score is 10 sat/vb.
>>>
>>> > D has a higher feerate/absolute fee than B.
>>>
>>> > Package A + C + D ancestor score is ~ 3 sat/vb ((A's 1000 sats + C's
>>> > 1000 sats + D's 1500 sats) / (A's 100 vb + C's 1000 vb + D's 100 vb))
>>>
>>> I am in agreement with your calculations but unsure if we disagree on
>>> the expected outcome. Yes, B has an ancestor score of 10sat/vb and D has an
>>> ancestor score of ~2.9sat/vb. Since D's ancestor score is lower than B's,
>>> it fails the proposed package RBF Rule #2, so this package would be
>>> rejected. Does this meet your expectations?
>>>
>>> Thank you for linking to projects that might be interested in package
>>> relay :)
>>>
>>> Thanks,
>>> Gloria
>>>
>>> On Mon, Sep 20, 2021 at 12:16 AM Antoine Riard <antoine.riard@gmail•com>
>>> wrote:
>>>
>>>> Hi Gloria,
>>>>
>>>> > A package may contain transactions that are already in the mempool. We
>>>> > remove
>>>> > ("deduplicate") those transactions from the package for the purposes
>>>> of
>>>> > package
>>>> > mempool acceptance. If a package is empty after deduplication, we do
>>>> > nothing.
>>>>
>>>> IIUC, you have a package A+B+C submitted for acceptance and A is
>>>> already in your mempool. You trim out A from the package and then evaluate
>>>> B+C.
>>>>
>>>> I think this might be an issue if A is the higher-fee element of the
>>>> ABC package. B+C package fees might be under the mempool min fee and will
>>>> be rejected, potentially breaking the acceptance expectations of the
>>>> package issuer ?
>>>>
>>>> Further, I think the dedup should be done on wtxid, as you might have
>>>> multiple valid witnesses. Though with varying vsizes and as such offering
>>>> different feerates.
>>>>
>>>> E.g you're going to evaluate the package A+B and A' is already in your
>>>> mempool with a bigger valid witness. You trim A based on txid, then you
>>>> evaluate A'+B, which fails the fee checks. However, evaluating A+B would
>>>> have been a success.
>>>>
>>>> AFAICT, the dedup rationale would be to save on CPU time/IO disk, to
>>>> avoid repeated signatures verification and parent UTXOs fetches ? Can we
>>>> achieve the same goal by bypassing tx-level checks for already-in txn while
>>>> conserving the package integrity for package-level checks ?
>>>>
>>>> > Note that it's possible for the parents to be
>>>> > indirect
>>>> > descendants/ancestors of one another, or for parent and child to
>>>> share a
>>>> > parent,
>>>> > so we cannot make any other topology assumptions.
>>>>
>>>> I'm not clearly understanding the accepted topologies. By "parent and
>>>> child to share a parent", do you mean the set of transactions A, B, C,
>>>> where B is spending A and C is spending A and B would be correct ?
>>>>
>>>> If yes, is there a width-limit introduced or we fallback on
>>>> MAX_PACKAGE_COUNT=25 ?
>>>>
>>>> IIRC, one rationale to come with this topology limitation was to lower
>>>> the DoS risks when potentially deploying p2p packages.
>>>>
>>>> Considering the current Core's mempool acceptance rules, I think CPFP
>>>> batching is unsafe for LN time-sensitive closure. A malicious tx-relay
>>>> jamming successful on one channel commitment transaction would contamine
>>>> the remaining commitments sharing the same package.
>>>>
>>>> E.g, you broadcast the package A+B+C+D+E where A,B,C,D are commitment
>>>> transactions and E a shared CPFP. If a malicious A' transaction has a
>>>> better feerate than A, the whole package acceptance will fail. Even if A'
>>>> confirms in the following block,
>>>> the propagation and confirmation of B+C+D have been delayed. This could
>>>> carry on a loss of funds.
>>>>
>>>> That said, if you're broadcasting commitment transactions without
>>>> time-sensitive HTLC outputs, I think the batching is effectively a fee
>>>> saving as you don't have to duplicate the CPFP.
>>>>
>>>> IMHO, I'm leaning towards deploying during a first phase
>>>> 1-parent/1-child. I think it's the most conservative step still improving
>>>> second-layer safety.
>>>>
>>>> > *Rationale*:  It would be incorrect to use the fees of transactions
>>>> that are
>>>> > already in the mempool, as we do not want a transaction's fees to be
>>>> > double-counted for both its individual RBF and package RBF.
>>>>
>>>> I'm unsure about the logical order of the checks proposed.
>>>>
>>>> If A+B is submitted to replace A', where A pays 0 sats, B pays 200 sats
>>>> and A' pays 100 sats. If we apply the individual RBF on A, A+B acceptance
>>>> fails. For this reason I think the individual RBF should be bypassed and
>>>> only the package RBF apply ?
>>>>
>>>> Note this situation is plausible, with current LN design, your
>>>> counterparty can have a commitment transaction with a better fee just by
>>>> selecting a higher `dust_limit_satoshis` than yours.
>>>>
>>>> > Examples F and G [14] show the same package, but P1 is submitted
>>>> > individually before
>>>> > the package in example G. In example F, we can see that the 300vB
>>>> package
>>>> > pays
>>>> > an additional 200sat in fees, which is not enough to pay for its own
>>>> > bandwidth
>>>> > (BIP125#4). In example G, we can see that P1 pays enough to replace
>>>> M1, but
>>>> > using P1's fees again during package submission would make it look
>>>> like a
>>>> > 300sat
>>>> > increase for a 200vB package. Even including its fees and size would
>>>> not be
>>>> > sufficient in this example, since the 300sat looks like enough for
>>>> the 300vB
>>>> > package. The calculation after deduplication is 100sat increase for a
>>>> > package
>>>> > of size 200vB, which correctly fails BIP125#4. Assume all
>>>> transactions have
>>>> > a
>>>> > size of 100vB.
>>>>
>>>> What problem are you trying to solve by the package feerate *after*
>>>> dedup rule ?
>>>>
>>>> My understanding is that an in-package transaction might be already in
>>>> the mempool. Therefore, to compute a correct RBF penalty replacement, the
>>>> vsize of this transaction could be discarded lowering the cost of package
>>>> RBF.
>>>>
>>>> If we keep a "safe" dedup mechanism (see my point above), I think this
>>>> discount is justified, as the validation cost of node operators is paid for
>>>> ?
>>>>
>>>> > The child cannot replace mempool transactions.
>>>>
>>>> Let's say you issue package A+B, then package C+B', where B' is a child
>>>> of both A and C. This rule fails the acceptance of C+B' ?
>>>>
>>>> I think this is a footgunish API: if a package issuer sends the
>>>> multiple-parent-one-child package A,B,C,D where D is the child of A,B,C,
>>>> then tries to broadcast the higher-feerate C'+D' package, it would be
>>>> rejected. So it breaks the naive broadcaster assumption that a
>>>> higher-feerate/higher-fee package always replaces ? And it might be unsafe
>>>> in protocols where states are symmetric. E.g a malicious counterparty
>>>> broadcasts S+A first, then you honestly broadcast S+B, where B pays better
>>>> fees.
>>>>
>>>> > All mempool transactions to be replaced must signal replaceability.
>>>>
>>>> I think this is unsafe for L2s if counterparties have malleability of
>>>> the child transaction. They can block your package replacement by
>>>> opting-out from RBF signaling. IIRC, LN's "anchor output" presents such an
>>>> ability.
>>>>
>>>> I think it's better to either fix inherited signaling or move towards
>>>> full-rbf.
>>>>
>>>> > if a package parent has already been submitted, it would
>>>> > look
>>>> >like the child is spending a "new" unconfirmed input.
>>>>
>>>> I think this is an issue brought by the trimming during the dedup
>>>> phase. If we preserve the package integrity, only re-using the tx-level
>>>> check results of already-in-mempool transactions to save CPU time, we
>>>> won't have this issue. Package children can add unconfirmed inputs as long
>>>> as they're in-package; the bip125 rule2 is only evaluated against parents ?
>>>>
>>>> > However, we still achieve the same goal of requiring the
>>>> > replacement
>>>> > transactions to have an ancestor score at least as high as the original
>>>> > ones.
>>>>
>>>> I'm not sure if this holds...
>>>>
>>>> Let's say you have in-mempool A, B where A pays 10 sat/vb for 100
>>>> vbytes and B pays 10 sat/vb for 100 vbytes. You have the candidate
>>>> replacement D spending both A and C where D pays 15sat/vb for 100 vbytes
>>>> and C pays 1 sat/vb for 1000 vbytes.
>>>>
>>>> Package A + B ancestor score is 10 sat/vb.
>>>>
>>>> D has a higher feerate/absolute fee than B.
>>>>
>>>> Package A + C + D ancestor score is ~3 sat/vb ((A's 1000 sats + C's
>>>> 1000 sats + D's 1500 sats) / (A's 100 vb + C's 1000 vb + D's 100 vb)).
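>>>>
>>>> (Spelling out that arithmetic as a quick sketch, with the numbers above:)
>>>>
>>>>     def ancestor_feerate(fees_sat, vsizes_vb):
>>>>         # Feerate of a transaction taken together with its unconfirmed ancestors.
>>>>         return sum(fees_sat) / sum(vsizes_vb)
>>>>
>>>>     # In-mempool: A (1000 sat, 100 vB), B (1000 sat, 100 vB).
>>>>     print(ancestor_feerate([1000, 1000], [100, 100]))             # 10.0 sat/vb
>>>>     # Candidate: A + C (1000 sat, 1000 vB) + D (1500 sat, 100 vB).
>>>>     print(ancestor_feerate([1000, 1000, 1500], [100, 1000, 100])) # ~2.9 sat/vb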
>>>>
>>>> Overall, this is a review through the lens of LN requirements. I
>>>> think other L2 protocols/applications
>>>> could be candidates for using package accept/relay, such as:
>>>> * https://github.com/lightninglabs/pool
>>>> * https://github.com/discreetlogcontracts/dlcspecs
>>>> * https://github.com/bitcoin-teleport/teleport-transactions/
>>>> * https://github.com/sapio-lang/sapio
>>>> *
>>>> https://github.com/commerceblock/mercury/blob/master/doc/statechains.md
>>>> * https://github.com/revault/practical-revault
>>>>
>>>> Thanks for rolling the ball forward on this subject.
>>>>
>>>> Antoine
>>>>
>>>> Le jeu. 16 sept. 2021 à 03:55, Gloria Zhao via bitcoin-dev <
>>>> bitcoin-dev@lists•linuxfoundation.org> a écrit :
>>>>
>>>>> Hi there,
>>>>>
>>>>> I'm writing to propose a set of mempool policy changes to enable
>>>>> package
>>>>> validation (in preparation for package relay) in Bitcoin Core. These
>>>>> would not
>>>>> be consensus or P2P protocol changes. However, since mempool policy
>>>>> significantly affects transaction propagation, I believe this is
>>>>> relevant for
>>>>> the mailing list.
>>>>>
>>>>> My proposal enables packages consisting of multiple parents and 1
>>>>> child. If you
>>>>> develop software that relies on specific transaction relay assumptions
>>>>> and/or
>>>>> are interested in using package relay in the future, I'm very
>>>>> interested to hear
>>>>> your feedback on the utility or restrictiveness of these package
>>>>> policies for
>>>>> your use cases.
>>>>>
>>>>> A draft implementation of this proposal can be found in [Bitcoin Core
>>>>> PR#22290][1].
>>>>>
>>>>> An illustrated version of this post can be found at
>>>>> https://gist.github.com/glozow/dc4e9d5c5b14ade7cdfac40f43adb18a.
>>>>> I have also linked the images below.
>>>>>
>>>>> ## Background
>>>>>
>>>>> Feel free to skip this section if you are already familiar with
>>>>> mempool policy
>>>>> and package relay terminology.
>>>>>
>>>>> ### Terminology Clarifications
>>>>>
>>>>> * Package = an ordered list of related transactions, representable by
>>>>> a Directed
>>>>>   Acyclic Graph.
>>>>> * Package Feerate = the total modified fees divided by the total
>>>>> virtual size of
>>>>>   all transactions in the package.
>>>>>     - Modified fees = a transaction's base fees + fee delta applied by
>>>>> the user
>>>>>       with `prioritisetransaction`. As such, we expect this to vary
>>>>> across
>>>>> mempools.
>>>>>     - Virtual Size = the maximum of virtual sizes calculated using
>>>>> [BIP141
>>>>>       virtual size][2] and sigop weight. [Implemented here in Bitcoin
>>>>> Core][3].
>>>>>     - Note that feerate is not necessarily based on the base fees and
>>>>> serialized
>>>>>       size.
>>>>>
>>>>> * Fee-Bumping = user/wallet actions that take advantage of miner
>>>>> incentives to
>>>>>   boost a transaction's candidacy for inclusion in a block, including
>>>>> Child Pays
>>>>> for Parent (CPFP) and [BIP125][12] Replace-by-Fee (RBF). Our intention
>>>>> in
>>>>> mempool policy is to recognize when the new transaction is more
>>>>> economical to
>>>>> mine than the original one(s) but not open DoS vectors, so there are
>>>>> some
>>>>> limitations.
>>>>>
>>>>> ### Policy
>>>>>
>>>>> The purpose of the mempool is to store the best (to be most
>>>>> incentive-compatible
>>>>> with miners, highest feerate) candidates for inclusion in a block.
>>>>> Miners use
>>>>> the mempool to build block templates. The mempool is also useful as a
>>>>> cache for
>>>>> boosting block relay and validation performance, aiding transaction
>>>>> relay, and
>>>>> generating feerate estimations.
>>>>>
>>>>> Ideally, all consensus-valid transactions paying reasonable fees
>>>>> should make it
>>>>> to miners through normal transaction relay, without any special
>>>>> connectivity or
>>>>> relationships with miners. On the other hand, nodes do not have
>>>>> unlimited
>>>>> resources, and a P2P network designed to let any honest node broadcast
>>>>> their
>>>>> transactions also exposes the transaction validation engine to DoS
>>>>> attacks from
>>>>> malicious peers.
>>>>>
>>>>> As such, for unconfirmed transactions we are considering for our
>>>>> mempool, we
>>>>> apply a set of validation rules in addition to consensus, primarily to
>>>>> protect
>>>>> us from resource exhaustion and aid our efforts to keep the highest fee
>>>>> transactions. We call this mempool _policy_: a set of (configurable,
>>>>> node-specific) rules that transactions must abide by in order to be
>>>>> accepted
>>>>> into our mempool. Transaction "Standardness" rules and mempool
>>>>> restrictions such
>>>>> as "too-long-mempool-chain" are both examples of policy.
>>>>>
>>>>> ### Package Relay and Package Mempool Accept
>>>>>
>>>>> In transaction relay, we currently consider transactions one at a time
>>>>> for
>>>>> submission to the mempool. This creates a limitation in the node's
>>>>> ability to
>>>>> determine which transactions have the highest feerates, since we
>>>>> cannot take
>>>>> into account descendants (i.e. cannot use CPFP) until all the
>>>>> transactions are
>>>>> in the mempool. Similarly, we cannot use a transaction's descendants
>>>>> when
>>>>> considering it for RBF. When an individual transaction does not meet
>>>>> the mempool
>>>>> minimum feerate and the user isn't able to create a replacement
>>>>> transaction
>>>>> directly, it will not be accepted by mempools.
>>>>>
>>>>> This limitation presents a security issue for applications and users
>>>>> relying on
>>>>> time-sensitive transactions. For example, Lightning and other
>>>>> protocols create
>>>>> UTXOs with multiple spending paths, where one counterparty's spending
>>>>> path opens
>>>>> up after a timelock, and users are protected from cheating scenarios
>>>>> as long as
>>>>> they redeem on-chain in time. A key security assumption is that all
>>>>> parties'
>>>>> transactions will propagate and confirm in a timely manner. This
>>>>> assumption can
>>>>> be broken if fee-bumping does not work as intended.
>>>>>
>>>>> The end goal for Package Relay is to consider multiple transactions at
>>>>> the same
>>>>> time, e.g. a transaction with its high-fee child. This may help us
>>>>> better
>>>>> determine whether transactions should be accepted to our mempool,
>>>>> especially if
>>>>> they don't meet fee requirements individually or are better RBF
>>>>> candidates as a
>>>>> package. A combination of changes to mempool validation logic, policy,
>>>>> and
>>>>> transaction relay allows us to better propagate the transactions with
>>>>> the
>>>>> highest package feerates to miners, and makes fee-bumping tools more
>>>>> powerful
>>>>> for users.
>>>>>
>>>>> The "relay" part of Package Relay suggests P2P messaging changes, but
>>>>> a large
>>>>> part of the changes are in the mempool's package validation logic. We
>>>>> call this
>>>>> *Package Mempool Accept*.
>>>>>
>>>>> ### Previous Work
>>>>>
>>>>> * Given that mempool validation is DoS-sensitive and complex, it would
>>>>> be
>>>>>   dangerous to haphazardly tack on package validation logic. Many
>>>>> efforts have
>>>>> been made to make mempool validation less opaque (see [#16400][4],
>>>>> [#21062][5],
>>>>> [#22675][6], [#22796][7]).
>>>>> * [#20833][8] Added basic capabilities for package validation, test
>>>>> accepts only
>>>>>   (no submission to mempool).
>>>>> * [#21800][9] Implemented package ancestor/descendant limit checks for
>>>>> arbitrary
>>>>>   packages. Still test accepts only.
>>>>> * Previous package relay proposals (see [#16401][10], [#19621][11]).
>>>>>
>>>>> ### Existing Package Rules
>>>>>
>>>>> These are in master as introduced in [#20833][8] and [#21800][9]. I'll
>>>>> consider
>>>>> them as "given" in the rest of this document, though they can be
>>>>> changed, since
>>>>> package validation is test-accept only right now.
>>>>>
>>>>> 1. A package cannot exceed `MAX_PACKAGE_COUNT=25` count and
>>>>> `MAX_PACKAGE_SIZE=101KvB` total size [8]
>>>>>
>>>>>    *Rationale*: This is already enforced as mempool
>>>>> ancestor/descendant limits.
>>>>> Presumably, transactions in a package are all related, so exceeding
>>>>> this limit
>>>>> would mean that the package can either be split up or it wouldn't pass
>>>>> this
>>>>> mempool policy.
>>>>>
>>>>> 2. Packages must be topologically sorted: if any dependencies exist
>>>>> between
>>>>> transactions, parents must appear somewhere before children. [8]
>>>>>
>>>>> 3. A package cannot have conflicting transactions, i.e. none of them
>>>>> can spend
>>>>> the same inputs. This also means there cannot be duplicate
>>>>> transactions. [8]
>>>>>
>>>>> 4. When packages are evaluated against ancestor/descendant limits in a
>>>>> test
>>>>> accept, the union of all of their descendants and ancestors is
>>>>> considered. This
>>>>> is essentially a "worst case" heuristic where every transaction in the
>>>>> package
>>>>> is treated as each other's ancestor and descendant. [8]
>>>>> Packages for which ancestor/descendant limits are accurately captured
>>>>> by this
>>>>> heuristic: [19]
>>>>>
>>>>> There are also limitations such as the fact that CPFP carve out is not
>>>>> applied
>>>>> to package transactions. #20833 also disables RBF in package
>>>>> validation; this
>>>>> proposal overrides that to allow packages to use RBF.
>>>>>
>>>>> ## Proposed Changes
>>>>>
>>>>> The next step in the Package Mempool Accept project is to implement
>>>>> submission
>>>>> to mempool, initially through RPC only. This allows us to test the
>>>>> submission
>>>>> logic before exposing it on P2P.
>>>>>
>>>>> ### Summary
>>>>>
>>>>> - Packages may contain already-in-mempool transactions.
>>>>> - Packages are 2 generations, Multi-Parent-1-Child.
>>>>> - Fee-related checks use the package feerate. This means that wallets
>>>>> can
>>>>> create a package that utilizes CPFP.
>>>>> - Parents are allowed to RBF mempool transactions with a set of rules
>>>>> similar
>>>>>   to BIP125. This enables a combination of CPFP and RBF, where a
>>>>> transaction's descendant fees pay for replacing mempool conflicts.
>>>>>
>>>>> There is a draft implementation in [#22290][1]. It is WIP, but
>>>>> feedback is
>>>>> always welcome.
>>>>>
>>>>> ### Details
>>>>>
>>>>> #### Packages May Contain Already-in-Mempool Transactions
>>>>>
>>>>> A package may contain transactions that are already in the mempool. We
>>>>> remove
>>>>> ("deduplicate") those transactions from the package for the purposes
>>>>> of package
>>>>> mempool acceptance. If a package is empty after deduplication, we do
>>>>> nothing.
>>>>>
>>>>> *Rationale*: Mempools vary across the network. It's possible for a
>>>>> parent to be
>>>>> accepted to the mempool of a peer on its own due to differences in
>>>>> policy and
>>>>> fee market fluctuations. We should not reject or penalize the entire
>>>>> package for
>>>>> an individual transaction as that could be a censorship vector.
>>>>>
>>>>> #### Packages Are Multi-Parent-1-Child
>>>>>
>>>>> Only packages of a specific topology are permitted. Namely, a package
>>>>> is exactly
>>>>> 1 child with all of its unconfirmed parents. After deduplication, the
>>>>> package
>>>>> may be exactly the same, empty, 1 child, 1 child with just some of its
>>>>> unconfirmed parents, etc. Note that it's possible for the parents to
>>>>> be indirect
>>>>> descendants/ancestors of one another, or for parent and child to share
>>>>> a parent,
>>>>> so we cannot make any other topology assumptions.
>>>>>
>>>>> *Rationale*: This allows for fee-bumping by CPFP. Allowing multiple
>>>>> parents
>>>>> makes it possible to fee-bump a batch of transactions. Restricting
>>>>> packages to a
>>>>> defined topology is also easier to reason about and simplifies the
>>>>> validation
>>>>> logic greatly. Multi-parent-1-child allows us to think of the package
>>>>> as one big
>>>>> transaction, where:
>>>>>
>>>>> - Inputs = all the inputs of parents + inputs of the child that come
>>>>> from
>>>>>   confirmed UTXOs
>>>>> - Outputs = all the outputs of the child + all outputs of the parents
>>>>> that
>>>>>   aren't spent by other transactions in the package
>>>>>
>>>>> Examples of packages that follow this rule (variations of example A
>>>>> show some
>>>>> possibilities after deduplication): ![image][15]
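>>>>>
>>>>> As a rough sketch of the topology requirement (illustrative only, not the
>>>>> code in #22290; the helper name is made up), a package could be checked
>>>>> roughly like this, given each transaction's txid and the txids it spends
>>>>> from:
>>>>>
>>>>>     def is_child_with_parents(package):
>>>>>         # package: list of (txid, spent_txids) tuples, topologically sorted,
>>>>>         # so the last entry is the presumed child.
>>>>>         if len(package) < 2:
>>>>>             return False
>>>>>         parent_txids = {txid for txid, _ in package[:-1]}
>>>>>         _, child_spends = package[-1]
>>>>>         # Every non-child transaction must be a direct parent of the child.
>>>>>         # (Checking that *all* unconfirmed parents are present additionally
>>>>>         # requires looking at the mempool/UTXO set.)
>>>>>         return parent_txids.issubset(child_spends)
>>>>>
>>>>>     # A and B are both spent by C; B also spends A, which is allowed.
>>>>>     package = [("A", set()), ("B", {"A"}), ("C", {"A", "B"})]
>>>>>     print(is_child_with_parents(package))  # True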
>>>>>
>>>>> #### Fee-Related Checks Use Package Feerate
>>>>>
>>>>> Package Feerate = the total modified fees divided by the total virtual
>>>>> size of
>>>>> all transactions in the package.
>>>>>
>>>>> To meet the two feerate requirements of a mempool, i.e., the
>>>>> pre-configured
>>>>> minimum relay feerate (`minRelayTxFee`) and dynamic mempool minimum
>>>>> feerate, the
>>>>> total package feerate is used instead of the individual feerate. The
>>>>> individual
>>>>> transactions are allowed to be below feerate requirements if the
>>>>> package meets
>>>>> the feerate requirements. For example, the parent(s) in the package
>>>>> can have 0
>>>>> fees but be paid for by the child.
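>>>>>
>>>>> A minimal sketch of this check (my own illustration with assumed numbers,
>>>>> not the PR's code):
>>>>>
>>>>>     def package_feerate(txs):
>>>>>         # txs: list of (modified_fee_sat, vsize_vb)
>>>>>         total_fee = sum(fee for fee, _ in txs)
>>>>>         total_vsize = sum(vsize for _, vsize in txs)
>>>>>         return total_fee / total_vsize
>>>>>
>>>>>     MIN_RELAY_FEERATE = 1.0    # sat/vB, assumed
>>>>>     mempool_min_feerate = 2.0  # sat/vB, assumed dynamic mempool minimum
>>>>>
>>>>>     # A zero-fee parent (0 sat, 100 vB) bumped by its child (500 sat, 100 vB):
>>>>>     rate = package_feerate([(0, 100), (500, 100)])             # 2.5 sat/vB
>>>>>     print(rate >= max(MIN_RELAY_FEERATE, mempool_min_feerate)) # True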
>>>>>
>>>>> *Rationale*: This can be thought of as "CPFP within a package,"
>>>>> solving the
>>>>> issue of a parent not meeting minimum fees on its own. This allows L2
>>>>> applications to adjust their fees at broadcast time instead of
>>>>> overshooting or
>>>>> risking getting stuck/pinned.
>>>>>
>>>>> We use the package feerate of the package *after deduplication*.
>>>>>
>>>>> *Rationale*:  It would be incorrect to use the fees of transactions
>>>>> that are
>>>>> already in the mempool, as we do not want a transaction's fees to be
>>>>> double-counted for both its individual RBF and package RBF.
>>>>>
>>>>> Examples F and G [14] show the same package, but P1 is submitted
>>>>> individually before
>>>>> the package in example G. In example F, we can see that the 300vB
>>>>> package pays
>>>>> an additional 200sat in fees, which is not enough to pay for its own
>>>>> bandwidth
>>>>> (BIP125#4). In example G, we can see that P1 pays enough to replace
>>>>> M1, but
>>>>> using P1's fees again during package submission would make it look
>>>>> like a 300sat
>>>>> increase for a 200vB package. Even including its fees and size would
>>>>> not be
>>>>> sufficient in this example, since the 300sat looks like enough for the
>>>>> 300vB
>>>>> package. The calculation after deduplication is 100sat increase for a
>>>>> package
>>>>> of size 200vB, which correctly fails BIP125#4. Assume all transactions
>>>>> have a
>>>>> size of 100vB.
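>>>>>
>>>>> To spell out the arithmetic of examples F and G (a sketch of the numbers
>>>>> above, assuming a 1 sat/vB incremental relay feerate):
>>>>>
>>>>>     INCREMENTAL_FEERATE = 1  # sat/vB, assumed
>>>>>
>>>>>     def pays_for_own_bandwidth(added_fees_sat, package_vsize_vb):
>>>>>         # BIP125#4-style check: added fees must cover the package's own relay.
>>>>>         return added_fees_sat >= INCREMENTAL_FEERATE * package_vsize_vb
>>>>>
>>>>>     # Example F: the whole 300 vB package only adds 200 sat of fees.
>>>>>     print(pays_for_own_bandwidth(200, 300))  # False -> rejected
>>>>>     # Example G, double-counting P1's fees but not its size:
>>>>>     print(pays_for_own_bandwidth(300, 200))  # True, but wrong
>>>>>     # Example G, double-counting P1's fees and size:
>>>>>     print(pays_for_own_bandwidth(300, 300))  # still (wrongly) passes
>>>>>     # Example G after deduplication: only P2+P3's 100 sat and 200 vB count.
>>>>>     print(pays_for_own_bandwidth(100, 200))  # False -> correctly rejected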
>>>>>
>>>>> #### Package RBF
>>>>>
>>>>> If a package meets feerate requirements as a package, the parents in
>>>>> the
>>>>> transaction are allowed to replace-by-fee mempool transactions. The
>>>>> child cannot
>>>>> replace mempool transactions. Multiple transactions can replace the
>>>>> same
>>>>> transaction, but in order to be valid, none of the transactions can
>>>>> try to
>>>>> replace an ancestor of another transaction in the same package (which
>>>>> would thus
>>>>> make its inputs unavailable).
>>>>>
>>>>> *Rationale*: Even if we are using package feerate, a package will not
>>>>> propagate
>>>>> as intended if RBF still requires each individual transaction to meet
>>>>> the
>>>>> feerate requirements.
>>>>>
>>>>> We use a set of rules slightly modified from BIP125 as follows:
>>>>>
>>>>> ##### Signaling (Rule #1)
>>>>>
>>>>> All mempool transactions to be replaced must signal replaceability.
>>>>>
>>>>> *Rationale*: Package RBF signaling logic should be the same for
>>>>> package RBF and
>>>>> single transaction acceptance. This would be updated if single
>>>>> transaction
>>>>> validation moves to full RBF.
>>>>>
>>>>> ##### New Unconfirmed Inputs (Rule #2)
>>>>>
>>>>> A package may include new unconfirmed inputs, but the ancestor feerate
>>>>> of the
>>>>> child must be at least as high as the ancestor feerates of every
>>>>> transaction
>>>>> being replaced. This is contrary to BIP125#2, which states "The
>>>>> replacement
>>>>> transaction may only include an unconfirmed input if that input was
>>>>> included in
>>>>> one of the original transactions. (An unconfirmed input spends an
>>>>> output from a
>>>>> currently-unconfirmed transaction.)"
>>>>>
>>>>> *Rationale*: The purpose of BIP125#2 is to ensure that the replacement
>>>>> transaction has a higher ancestor score than the original
>>>>> transaction(s) (see
>>>>> [comment][13]). Example H [16] shows how adding a new unconfirmed
>>>>> input can lower the
>>>>> ancestor score of the replacement transaction. P1 is trying to replace
>>>>> M1, and
>>>>> spends an unconfirmed output of M2. P1 pays 800sat, M1 pays 600sat,
>>>>> and M2 pays
>>>>> 100sat. Assume all transactions have a size of 100vB. While, in
>>>>> isolation, P1
>>>>> looks like a better mining candidate than M1, it must be mined with
>>>>> M2, so its
>>>>> ancestor feerate is actually 4.5sat/vB.  This is lower than M1's
>>>>> ancestor
>>>>> feerate, which is 6sat/vB.
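>>>>>
>>>>> In code form, the comparison above is simply (a sketch of the stated
>>>>> numbers):
>>>>>
>>>>>     def ancestor_feerate(fees_sat, vsizes_vb):
>>>>>         return sum(fees_sat) / sum(vsizes_vb)
>>>>>
>>>>>     # P1 (800 sat) spends an unconfirmed output of M2 (100 sat); all 100 vB.
>>>>>     p1_ancestor = ancestor_feerate([800, 100], [100, 100])  # 4.5 sat/vB
>>>>>     m1_ancestor = ancestor_feerate([600], [100])            # 6.0 sat/vB
>>>>>     print(p1_ancestor < m1_ancestor)  # True: P1 is the worse mining candidate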
>>>>>
>>>>> In package RBF, the rule analogous to BIP125#2 would be "none of the
>>>>> transactions in the package can spend new unconfirmed inputs." Example
>>>>> J [17] shows
>>>>> why, if any of the package transactions have ancestors, package
>>>>> feerate is no
>>>>> longer accurate. Even though M2 and M3 are not ancestors of P1 (which
>>>>> is the
>>>>> replacement transaction in an RBF), we're actually interested in the
>>>>> entire
>>>>> package. A miner should mine M1 which is 5sat/vB instead of M2, M3,
>>>>> P1, P2, and
>>>>> P3, which is only 4sat/vB. The Package RBF rule cannot be loosened to
>>>>> only allow
>>>>> the child to have new unconfirmed inputs, either, because it can still
>>>>> cause us
>>>>> to overestimate the package's ancestor score.
>>>>>
>>>>> However, enforcing a rule analogous to BIP125#2 would not only make
>>>>> Package RBF
>>>>> less useful, but would also break Package RBF for packages with
>>>>> parents already
>>>>> in the mempool: if a package parent has already been submitted, it
>>>>> would look
>>>>> like the child is spending a "new" unconfirmed input. In example K
>>>>> [18], we're
>>>>> looking to replace M1 with the entire package including P1, P2, and
>>>>> P3. We must
>>>>> consider the case where one of the parents is already in the mempool
>>>>> (in this
>>>>> case, P2), which means we must allow P3 to have new unconfirmed
>>>>> inputs. However,
>>>>> M2 lowers the ancestor score of P3 to 4.3sat/vB, so we should not
>>>>> replace M1
>>>>> with this package.
>>>>>
>>>>> Thus, the package RBF rule regarding new unconfirmed inputs is less
>>>>> strict than
>>>>> BIP125#2. However, we still achieve the same goal of requiring the
>>>>> replacement
>>>>> transactions to have an ancestor score at least as high as the original
>>>>> ones. As
>>>>> a result, the entire package is required to be a higher feerate mining
>>>>> candidate
>>>>> than each of the replaced transactions.
>>>>>
>>>>> Another note: the [comment][13] above the BIP125#2 code in the
>>>>> original RBF
>>>>> implementation suggests that the rule was intended to be temporary.
>>>>>
>>>>> ##### Absolute Fee (Rule #3)
>>>>>
>>>>> The package must increase the absolute fee of the mempool, i.e. the
>>>>> total fees
>>>>> of the package must be higher than the absolute fees of the mempool
>>>>> transactions
>>>>> it replaces. Combined with the CPFP rule above, this differs from
>>>>> BIP125 Rule #3
>>>>> - an individual transaction in the package may have lower fees than the
>>>>>   transaction(s) it is replacing. In fact, it may have 0 fees, and the
>>>>> child
>>>>> pays for RBF.
>>>>>
>>>>> ##### Feerate (Rule #4)
>>>>>
>>>>> The package must pay for its own bandwidth; the package feerate must
>>>>> be higher
>>>>> than the replaced transactions by at least minimum relay feerate
>>>>> (`incrementalRelayFee`). Combined with the CPFP rule above, this
>>>>> differs from
>>>>> BIP125 Rule #4 - an individual transaction in the package can have a
>>>>> lower
>>>>> feerate than the transaction(s) it is replacing. In fact, it may have
>>>>> 0 fees,
>>>>> and the child pays for RBF.
>>>>>
>>>>> ##### Total Number of Replaced Transactions (Rule #5)
>>>>>
>>>>> The package cannot replace more than 100 mempool transactions. This is
>>>>> identical
>>>>> to BIP125 Rule #5.
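>>>>>
>>>>> Putting the fee-related rules together, a rough sketch of the checks
>>>>> (illustrative pseudocode of this write-up, not the code in #22290; the
>>>>> 1 sat/vB incremental feerate is an assumption):
>>>>>
>>>>>     INCREMENTAL_FEERATE = 1.0  # sat/vB, assumed
>>>>>     MAX_REPLACEMENTS = 100     # Rule #5, as in BIP125
>>>>>
>>>>>     def package_rbf_fee_checks(package, replaced):
>>>>>         # package, replaced: lists of (fee_sat, vsize_vb); package is
>>>>>         # post-deduplication, replaced includes conflicts and descendants.
>>>>>         if len(replaced) > MAX_REPLACEMENTS:
>>>>>             return False                          # Rule #5
>>>>>         package_fee = sum(f for f, _ in package)
>>>>>         package_vsize = sum(v for _, v in package)
>>>>>         replaced_fee = sum(f for f, _ in replaced)
>>>>>         if package_fee <= replaced_fee:
>>>>>             return False                          # Rule #3: higher absolute fee
>>>>>         # Rule #4: the fee increase must pay for the package's own bandwidth.
>>>>>         return package_fee - replaced_fee >= INCREMENTAL_FEERATE * package_vsize
>>>>>
>>>>>     # A zero-fee parent plus a 500 sat child, replacing a 100 sat transaction:
>>>>>     print(package_rbf_fee_checks([(0, 100), (500, 100)], [(100, 100)]))  # True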
>>>>>
>>>>> ### Expected FAQs
>>>>>
>>>>> 1. Is it possible for only some of the package to make it into the
>>>>> mempool?
>>>>>
>>>>>    Yes, it is. However, since we evict transactions from the mempool by
>>>>> descendant score and the package child is supposed to be sponsoring
>>>>> the fees of
>>>>> its parents, the most common scenario would be all-or-nothing. This is
>>>>> incentive-compatible. In fact, to be conservative, package validation
>>>>> should
>>>>> begin by trying to submit all of the transactions individually, and
>>>>> only use the
>>>>> package mempool acceptance logic if the parents fail due to low
>>>>> feerate.
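>>>>>
>>>>> In other words, the submission flow could look roughly like this (a toy
>>>>> sketch of the logic described above; names and the single feerate floor
>>>>> are placeholders, not the actual interfaces):
>>>>>
>>>>>     MIN_FEERATE = 1.0  # sat/vB, assumed mempool minimum for this toy example
>>>>>
>>>>>     def try_individual(tx):
>>>>>         # Stand-in for full single-transaction validation: feerate floor only.
>>>>>         return tx["fee"] / tx["vsize"] >= MIN_FEERATE
>>>>>
>>>>>     def submit(package):
>>>>>         # Try each transaction on its own first.
>>>>>         failed = [tx for tx in package if not try_individual(tx)]
>>>>>         if not failed:
>>>>>             return "all accepted individually"
>>>>>         # Fall back to package validation for the fee-based failures.
>>>>>         total_fee = sum(tx["fee"] for tx in package)
>>>>>         total_vsize = sum(tx["vsize"] for tx in package)
>>>>>         if total_fee / total_vsize >= MIN_FEERATE:
>>>>>             return "accepted via package logic"
>>>>>         return "rejected"
>>>>>
>>>>>     parent, child = {"fee": 0, "vsize": 100}, {"fee": 300, "vsize": 100}
>>>>>     print(submit([parent, child]))  # "accepted via package logic"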
>>>>>
>>>>> 2. Should we allow packages to contain already-confirmed transactions?
>>>>>
>>>>>     No, for practical reasons. In mempool validation, we actually
>>>>> aren't able to
>>>>> tell with 100% confidence if we are looking at a transaction that has
>>>>> already
>>>>> confirmed, because we look up inputs using a UTXO set. If we have
>>>>> historical
>>>>> block data, it's possible to look for it, but this is inefficient, not
>>>>> always
>>>>> possible for pruning nodes, and unnecessary because we're not going to
>>>>> do
>>>>> anything with the transaction anyway. As such, we already have the
>>>>> expectation
>>>>> that transaction relay is somewhat "stateful" i.e. nobody should be
>>>>> relaying
>>>>> transactions that have already been confirmed. Similarly, we shouldn't
>>>>> be
>>>>> relaying packages that contain already-confirmed transactions.
>>>>>
>>>>> [1]: https://github.com/bitcoin/bitcoin/pull/22290
>>>>> [2]:
>>>>> https://github.com/bitcoin/bips/blob/1f0b563738199ca60d32b4ba779797fc97d040fe/bip-0141.mediawiki#transaction-size-calculations
>>>>> [3]:
>>>>> https://github.com/bitcoin/bitcoin/blob/94f83534e4b771944af7d9ed0f40746f392eb75e/src/policy/policy.cpp#L282
>>>>> [4]: https://github.com/bitcoin/bitcoin/pull/16400
>>>>> [5]: https://github.com/bitcoin/bitcoin/pull/21062
>>>>> [6]: https://github.com/bitcoin/bitcoin/pull/22675
>>>>> [7]: https://github.com/bitcoin/bitcoin/pull/22796
>>>>> [8]: https://github.com/bitcoin/bitcoin/pull/20833
>>>>> [9]: https://github.com/bitcoin/bitcoin/pull/21800
>>>>> [10]: https://github.com/bitcoin/bitcoin/pull/16401
>>>>> [11]: https://github.com/bitcoin/bitcoin/pull/19621
>>>>> [12]: https://github.com/bitcoin/bips/blob/master/bip-0125.mediawiki
>>>>> [13]:
>>>>> https://github.com/bitcoin/bitcoin/pull/6871/files#diff-34d21af3c614ea3cee120df276c9c4ae95053830d7f1d3deaf009a4625409ad2R1101-R1104
>>>>> [14]:
>>>>> https://user-images.githubusercontent.com/25183001/133567078-075a971c-0619-4339-9168-b41fd2b90c28.png
>>>>> [15]:
>>>>> https://user-images.githubusercontent.com/25183001/132856734-fc17da75-f875-44bb-b954-cb7a1725cc0d.png
>>>>> [16]:
>>>>> https://user-images.githubusercontent.com/25183001/133567347-a3e2e4a8-ae9c-49f8-abb9-81e8e0aba224.png
>>>>> [17]:
>>>>> https://user-images.githubusercontent.com/25183001/133567370-21566d0e-36c8-4831-b1a8-706634540af3.png
>>>>> [18]:
>>>>> https://user-images.githubusercontent.com/25183001/133567444-bfff1142-439f-4547-800a-2ba2b0242bcb.png
>>>>> [19]:
>>>>> https://user-images.githubusercontent.com/25183001/133456219-0bb447cb-dcb4-4a31-b9c1-7d86205b68bc.png
>>>>> [20]:
>>>>> https://user-images.githubusercontent.com/25183001/132857787-7b7c6f56-af96-44c8-8d78-983719888c19.png
>>>>> _______________________________________________
>>>>> bitcoin-dev mailing list
>>>>> bitcoin-dev@lists•linuxfoundation.org
>>>>> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>>>>>
>>>>

[-- Attachment #2: Type: text/html, Size: 67990 bytes --]

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [bitcoin-dev] Proposal: Package Mempool Accept and Package RBF
  2021-09-26 21:10         ` Antoine Riard
@ 2021-09-27  7:15           ` Bastien TEINTURIER
  2021-09-28 22:59             ` Antoine Riard
  0 siblings, 1 reply; 16+ messages in thread
From: Bastien TEINTURIER @ 2021-09-27  7:15 UTC (permalink / raw)
  To: Antoine Riard, Bitcoin Protocol Discussion

[-- Attachment #1: Type: text/plain, Size: 67957 bytes --]

>
> I think we could restrict package acceptance to only confirmed inputs for
> now and revisit this point later ? For LN-anchor, you can assume that the
> fee-bumping UTXO feeding the CPFP is already
> confirmed. Or are there currently-deployed use-cases which would benefit
> from your proposed Rule #2 ?
>

I think constraining package acceptance to only confirmed inputs
is very limiting and quite dangerous for L2 protocols.

In the case of LN, an attacker can game this and heavily restrict
your RBF attempts if you're only allowed to use confirmed inputs
and have many channels (and a limited number of confirmed inputs).
Otherwise you'll need node operators to pre-emptively split their
utxos into many small utxos just for fee bumping, which is inefficient...

Bastien

Le lun. 27 sept. 2021 à 00:27, Antoine Riard via bitcoin-dev <
bitcoin-dev@lists•linuxfoundation.org> a écrit :

> Hi Gloria,
>
> Thanks for your answers,
>
> > In summary, it seems that the decisions that might still need
> > attention/input from devs on this mailing list are:
> > 1. Whether we should start with multiple-parent-1-child or
> 1-parent-1-child.
> > 2. Whether it's ok to require that the child not have conflicts with
> > mempool transactions.
>
> Yes, for 1) it would be good to have input from more potential users of
> package acceptance. And for 2) I think it's more a matter of clearer wording
> of the proposal.
>
> However, see my final point on the relaxation around "unconfirmed inputs"
> which might in fact alter our current block construction strategy.
>
> > Right, the fact that we essentially always choose the first-seen witness
> is
> > an unfortunate limitation that exists already. Adding package mempool
> > accept doesn't worsen this, but the procedure in the future is to replace
> > the witness when it makes sense economically. We can also add logic to
> > allow package feerate to pay for witness replacements as well. This is
> > pretty far into the future, though.
>
> Yes, I agree package mempool accept doesn't worsen this. And it's not an
> issue for current LN as you can't significantly inflate a spending witness
> for the 2-of-2 funding output.
> However, it might be an issue for multi-party protocols where the spending
> script has alternative branches with asymmetric valid witness weights.
> Taproot should ease that kind of script, so hopefully we would deploy
> wtxid-replacement not too far in the future.
>
> > I could be misunderstanding, but an attacker wouldn't be able to
> > batch-attack like this. Alice's package only conflicts with A' + D', not
> A'
> > + B' + C' + D'. She only needs to pay for evicting 2 transactions.
>
> Yeah, I can be clearer. I think you have 2 pinning attack scenarios to
> consider.
>
> In LN, if you're trying to confirm a commitment transaction to time-out or
> claim an HTLC on-chain and the timelock is near expiration, you should be
> ready to pay as much in commitment + 2nd-stage HTLC transaction fees as the
> value offered by the HTLC.
>
> Following this security assumption, an attacker can exploit it by
> targeting commitment transactions from different channels together,
> blocking them under a high-fee child whose fee value
> is equal to the top-value HTLC + 1. Victims' fee-bumping logic won't
> overbid, as it's not worth offering fees beyond the HTLCs they're competing
> for. Apart from observing mempool state, victims can't learn they're
> targeted by the same attacker.
>
> To draw from the aforementioned topology, Mallory broadcasts A' + B' + C'
> + D', where A' conflicts with Alice's P1, B' conflicts with Bob's P2, and C'
> conflicts with Caroll's P3. Let's assume P1 is confirming the top-value
> HTLC of the set. If D' fees are higher than P1 + 1, it won't be rational for
> Alice or Bob or Caroll to keep offering competing feerates. Mallory will be
> at a loss on stealing P1, as she has paid more in fees but will realize a
> gain on P2+P3.
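>
> (To make the economics concrete with purely made-up numbers, my own
> illustration of the scenario above rather than figures from the thread:)
>
>     # Hypothetical HTLC values at stake in the three targeted channels (sats):
>     htlc_p1, htlc_p2, htlc_p3 = 100_000, 80_000, 80_000
>
>     # Mallory's pinning child D' outbids the most any single victim would pay:
>     fee_d_prime = htlc_p1 + 1
>
>     # If the pin delays all three victims past their timelocks, Mallory can
>     # claim the HTLCs; channel 1 alone is a small loss, the batch is not.
>     steal = htlc_p1 + htlc_p2 + htlc_p3
>     cost = fee_d_prime                   # ignoring A'/B'/C' fees for simplicity
>     print(steal - cost)                  # 159_999 sats net across the batch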
>
> In this model, Alice is allowed to evict those 2 transactions (A' + D')
> but as she is economically-bounded she won't succeed.
>
> Mallory is maliciously exploiting RBF rule 3 on absolute fee. I think this
> 1st pinning scenario is correct and "lucrative" when you sum the global
> gain/loss.
>
> There is a 2nd attack scenario with A + B + C + D, where D is the child
> of A,B,C. All those transactions are honestly issued by Alice. Once A + B +
> C + D are propagated in network mempools, Mallory is able to replace A + D
> with A' + D', where D' is paying a higher fee. This package A' + D' will
> confirm soon if D's feerate was compelling, but Mallory succeeds in delaying
> the confirmation
> of B + C for one or more blocks. As B + C are pre-signed commitments with
> a low feerate, they won't confirm without Alice issuing a new child E.
> Mallory can repeat the same trick by broadcasting
> B' + E' and delaying the confirmation of C again.
>
> If the pending HTLC in the remaining package has a higher value than all
> the fees the attacker over-bid, Mallory should realize a gain. With this 2nd
> pinning attack, the malicious entity buys a confirmation delay of your
> packaged-together commitments.
>
> Assuming those attacks are correct, I'm leaning towards being conservative
> with the LDK broadcast backend. Though once again, other L2 devs likely
> have other use-cases and opinions :)
>
> >  B' only needs to pay for itself in this case.
>
> Yes, I think it's a nice discount when the UTXO is single-owned. In the
> context of a shared-owned UTXO (e.g LN), you might not get it if there is an
> in-mempool package already spending the UTXO, and you have to assume the
> worst-case scenario, i.e. have B' committing enough fee to pay for the A'
> replacement bandwidth. I think we can't do much for this case...
>
> > If a package meets feerate requirements as a
> package, the parents in the transaction are allowed to replace-by-fee
> mempool transactions. The child cannot replace mempool transactions."
>
> I agree with the Mallory-vs-Alice case. Though what if Alice broadcasts
> A+B' to replace A+B because the first broadcast isn't satisfying anymore due
> to mempool spikes ? Assuming B' fees are enough, I see that case as child B'
> replacing in-mempool transaction B, which I understand goes against "The
> child cannot replace mempool transactions".
>
> Maybe wording could be a bit clearer ?
>
> > While it would be nice to have full RBF, malleability of the child won't
> > block RBF here. If we're trying to replace A', we only require that A'
> > signals replaceability, and don't mind if its child doesn't.
>
> Yes, it sounds good.
>
> > Yes, A+C+D pays 1500sat more in fees, but it is also 1000vB larger. A
> miner
> > should prefer to utilize their block space more effectively.
>
> If your mempool is empty and only composed of A+C+D or A+B, I think taking
> A+C+D is the most efficient block construction you can come up with as a
> miner ?
>
> > No, because we don't use that model.
>
> Can you describe what miner model we are using ? Like the block
> construction strategy implemented by `addPackagesTxs` or also encompassing
> our current mempool acceptance policy, which I think rely on absolute fee
> over ancestor score in case of replacement ?
>
> I think this point is worth discussing, as otherwise we might downgrade
> the efficiency of our current block construction strategy in periods of
> near-empty mempools. That knowledge could be discreetly leveraged by a
> miner to gain an advantage over the rest of the mining ecosystem.
>
> Note, I think we *might* have to go in this direction if we want to
> replace replace-by-fee by replace-by-feerate or replace-by-ancestor and
> solve in-depth pinning attacks. Though if we do so,
> IMO we would need more thought.
>
> I think we could restrict package acceptance to only confirmed inputs for
> now and revisit this point later ? For LN-anchor, you can assume that the
> fee-bumping UTXO feeding the CPFP is already
> confirmed. Or are there currently-deployed use-cases which would benefit
> from your proposed Rule #2 ?
>
> Antoine
>
> Le jeu. 23 sept. 2021 à 11:36, Gloria Zhao <gloriajzhao@gmail•com> a
> écrit :
>
>> Hi Antoine,
>>
>> Thanks as always for your input. I'm glad we agree on so much!
>>
>> In summary, it seems that the decisions that might still need
>> attention/input from devs on this mailing list are:
>> 1. Whether we should start with multiple-parent-1-child or
>> 1-parent-1-child.
>> 2. Whether it's ok to require that the child not have conflicts with
>> mempool transactions.
>>
>> Responding to your comments...
>>
>> > IIUC, you have package A+B, during the dedup phase early in
>> `AcceptMultipleTransactions` if you observe same-txid-different-wtxid A'
>> and A' is higher feerate than A, you trim A and replace by A' ?
>>
>> > I think this approach is safe, the one that appears unsafe to me is when
>> A' has a _lower_ feerate, even if A' is already accepted by our mempool ?
>> In that case iirc that would be a pinning.
>>
>> Right, the fact that we essentially always choose the first-seen witness
>> is an unfortunate limitation that exists already. Adding package mempool
>> accept doesn't worsen this, but the procedure in the future is to replace
>> the witness when it makes sense economically. We can also add logic to
>> allow package feerate to pay for witness replacements as well. This is
>> pretty far into the future, though.
>>
>> > It sounds uneconomical for an attacker but I think it's not when you
>> consider that you can "batch" attack against multiple honest
>> counterparties. E.g, Mallory broadcast A' + B' + C' + D' where A' conflicts
>> with Alice's honest package P1, B' conflicts with Bob's honest package P2,
>> C' conflicts with Caroll's honest package P3. And D' is a high-fee child of
>> A' + B' + C'.
>>
>> > If D' is higher-fee than P1 or P2 or P3 but inferior to the sum of
>> HTLCs confirmed by P1+P2+P3, I think it's lucrative for the attacker ?
>>
>> I could be misunderstanding, but an attacker wouldn't be able to
>> batch-attack like this. Alice's package only conflicts with A' + D', not A'
>> + B' + C' + D'. She only needs to pay for evicting 2 transactions.
>>
>> > Do we assume that broadcasted packages are "honest" by default and that
>> the parent(s) always need the child to pass the fee checks, that way saving
>> the processing of individual transactions which are expected to fail in 99%
>> of cases or more ad hoc composition of packages at relay ?
>> > I think this point is quite dependent on the p2p packages format/logic
>> we'll end up on and that we should feel free to revisit it later ?
>>
>> I think it's the opposite; there's no way for us to assume that p2p
>> packages will be "honest." I'd like to have two things before we expose on
>> P2P: (1) ensure that the amount of resources potentially allocated for
>> package validation isn't disproportionately higher than that of single
>> transaction validation and (2) only use package validation when we're
>> unsatisfied with the single validation result, e.g. we might get better
>> fees.
>> Yes, let's revisit this later :)
>>
>>  > Yes, if you receive A+B, and A is already in-mempool, I agree you can
>> discard its feerate as B should pay for all fees checked on its own. Where
>> I'm unclear is when you have in-mempool A+B and receive A+B'. Should B'
>> have a fee high enough to cover the bandwidth penalty replacement
>> (`PaysForRBF`, 2nd check) of both A+B' or only B' ?
>>
>>  B' only needs to pay for itself in this case.
>>
>> > > Do we want the child to be able to replace mempool transactions as
>> well?
>>
>> > If we mean when you have replaceable A+B then A'+B' try to replace with
>> a higher-feerate ? I think that's exactly the case we need for Lightning as
>> A+B is coming from Alice and A'+B' is coming from Bob :/
>>
>> Let me clarify this because I can see that my wording was ambiguous, and
>> then please let me know if it fits Lightning's needs?
>>
>> In my proposal, I wrote "If a package meets feerate requirements as a
>> package, the parents in the transaction are allowed to replace-by-fee
>> mempool transactions. The child cannot replace mempool transactions." What
>> I meant was: the package can replace mempool transactions if any of the
>> parents conflict with mempool transactions. The child cannot not conflict
>> with any mempool transactions.
>> The Lightning use case this attempts to address is: Alice and Mallory are
>> LN counterparties, and have packages A+B and A'+B', respectively. A and A'
>> are their commitment transactions and conflict with each other; they have
>> shared inputs and different txids.
>> B spends Alice's anchor output from A. B' spends Mallory's anchor output
>> from A'. Thus, B and B' do not conflict with each other.
>> Alice can broadcast her package, A+B, to replace Mallory's package,
>> A'+B', since B doesn't conflict with the mempool.
>>
>> Would this be ok?
>>
>> > The second option, a child of A'. In the LN case I think the CPFP is
>> attached to one's anchor output.
>>
>> While it would be nice to have full RBF, malleability of the child won't
>> block RBF here. If we're trying to replace A', we only require that A'
>> signals replaceability, and don't mind if its child doesn't.
>>
>> > > B has an ancestor score of 10sat/vb and D has an
>> > > ancestor score of ~2.9sat/vb. Since D's ancestor score is lower than
>> B's,
>> > > it fails the proposed package RBF Rule #2, so this package would be
>> > > rejected. Does this meet your expectations?
>>
>> > Well what sounds odd to me, in my example, we fail D even if it has a
>> higher-fee than B. Like A+B absolute fees are 2000 sats and A+C+D absolute
>> fees are 3500 sats ?
>>
>> Yes, A+C+D pays 1500sat more in fees, but it is also 1000vB larger. A
>> miner should prefer to utilize their block space more effectively.
>>
>> > Is this compatible with a model where a miner prioritizes absolute fees
>> over ancestor score, in the case that mempools aren't full enough to
>> fill a block ?
>>
>> No, because we don't use that model.
>>
>> Thanks,
>> Gloria
>>
>> On Thu, Sep 23, 2021 at 5:29 AM Antoine Riard <antoine.riard@gmail•com>
>> wrote:
>>
>>> > Correct, if B+C is too low feerate to be accepted, we will reject it. I
>>> > prefer this because it is incentive compatible: A can be mined by
>>> itself,
>>> > so there's no reason to prefer A+B+C instead of A.
>>> > As another way of looking at this, consider the case where we do accept
>>> > A+B+C and it sits at the "bottom" of our mempool. If our mempool
>>> reaches
>>> > capacity, we evict the lowest descendant feerate transactions, which
>>> are
>>> > B+C in this case. This gives us the same resulting mempool, with A and
>>> not
>>> > B+C.
>>>
>>> I agree here. Doing otherwise, we might evict other mempool transactions
>>> in `MempoolAccept::Finalize` with a higher feerate than B+C, while those
>>> evicted transactions are the most compelling for block construction.
>>>
>>> I thought at first missing this acceptance requirement would break a
>>> fee-bumping scheme like Parent-Pay-For-Child where a high-fee parent is
>>> attached to a child signed with SIGHASH_ANYONECANPAY but in this case the
>>> child fee is capturing the parent value. I can't think of other fee-bumping
>>> schemes potentially affected. If they do exist I would say they're wrong in
>>> their design assumptions.
>>>
>>> > If or when we have witness replacement, the logic is: if the individual
>>> > transaction is enough to replace the mempool one, the replacement will
>>> > happen during the preceding individual transaction acceptance, and
>>> > deduplication logic will work. Otherwise, we will try to deduplicate by
>>> > wtxid, see that we need a package witness replacement, and use the
>>> package
>>> > feerate to evaluate whether this is economically rational.
>>>
>>> IIUC, you have package A+B, during the dedup phase early in
>>> `AcceptMultipleTransactions` if you observe same-txid-different-wtxid A'
>>> and A' is higher feerate than A, you trim A and replace by A' ?
>>>
>>> I think this approach is safe, the one that appears unsafe to me is when
>>> A' has a _lower_ feerate, even if A' is already accepted by our mempool ?
>>> In that case iirc that would be a pinning.
>>>
>>> Good to see progress on witness replacement before we see usage of
>>> Taproot trees in multi-party contexts, where a malicious counterparty
>>> inflates its witness to jam an honest spending.
>>>
>>> (Note, the commit linked currently points nowhere :))
>>>
>>>
>>> > Please note that A may replace A' even if A' has higher fees than A
>>> > individually, because the proposed package RBF utilizes the fees and
>>> size
>>> > of the entire package. This just requires E to pay enough fees,
>>> although
>>> > this can be pretty high if there are also potential B' and C' competing
>>> > commitment transactions that we don't know about.
>>>
>>> Ah right, if the package acceptance waives `PaysMoreThanConflicts` for
>>> the individual check on A, the honest package should replace the pinning
>>> attempt. I've not fully parsed the proposed implementation yet.
>>>
>>> Though note, I think it's still unsafe for a Lightning
>>> multi-commitment-broadcast-as-one-package as a malicious A' might have an
>>> absolute fee higher than E. It sounds uneconomical for
>>> an attacker but I think it's not when you consider that you can "batch"
>>> attack against multiple honest counterparties. E.g, Mallory broadcast A' +
>>> B' + C' + D' where A' conflicts with Alice's honest package P1, B'
>>> conflicts with Bob's honest package P2, C' conflicts with Caroll's honest
>>> package P3. And D' is a high-fee child of A' + B' + C'.
>>>
>>> If D' is higher-fee than P1 or P2 or P3 but inferior to the sum of HTLCs
>>> confirmed by P1+P2+P3, I think it's lucrative for the attacker ?
>>>
>>> > So far, my understanding is that multi-parent-1-child is desired for
>>> > batched fee-bumping (
>>> > https://github.com/bitcoin/bitcoin/pull/22674#issuecomment-897951289)
>>> and
>>> > I've also seen your response which I have less context on (
>>> > https://github.com/bitcoin/bitcoin/pull/22674#issuecomment-900352202).
>>> That
>>> > being said, I am happy to create a new proposal for 1 parent + 1 child
>>> > (which would be slightly simpler) and plan for moving to
>>> > multi-parent-1-child later if that is preferred. I am very interested
>>> in
>>> > hearing feedback on that approach.
>>>
>>> I think batched fee-bumping is okay as long as you don't have
>>> time-sensitive outputs encumbering your commitment transactions. For the
>>> reasons mentioned above, I think that's unsafe.
>>>
>>> What I'm worried about is L2 developers, potentially not aware of
>>> all the mempool subtleties, blurring the difference and always batching
>>> their broadcasts by default.
>>>
>>> IMO, a good thing about restricting to 1-parent + 1-child is that we
>>> artificially constrain the L2 design space for now and minimize the risks
>>> of unsafe usage of the package API :)
>>>
>>> I think that's a point where it would be relevant to have the opinion of
>>> more L2 devs.
>>>
>>> > I think there is a misunderstanding here - let me describe what I'm
>>> > proposing we'd do in this situation: we'll try individual submission
>>> for A,
>>> > see that it fails due to "insufficient fees." Then, we'll try package
>>> > validation for A+B and use package RBF. If A+B pays enough, it can
>>> still
>>> > replace A'. If A fails for a bad signature, we won't look at B or A+B.
>>> Does
>>> > this meet your expectations?
>>>
>>> Yes there was a misunderstanding, I think this approach is correct, it's
>>> more a question of performance. Do we assume that broadcasted packages are
>>> "honest" by default and that the parent(s) always need the child to pass
>>> the fee checks, that way saving the processing of individual transactions
>>> which are expected to fail in 99% of cases or more ad hoc composition of
>>> packages at relay ?
>>>
>>> I think this point is quite dependent on the p2p packages format/logic
>>> we'll end up on and that we should feel free to revisit it later ?
>>>
>>>
>>> > What problem are you trying to solve by the package feerate *after*
>>> dedup
>>> rule ?
>>> > My understanding is that an in-package transaction might be already in
>>> the mempool. Therefore, to compute a correct RBF penalty replacement, the
>>> vsize of this transaction could be discarded lowering the cost of package
>>> RBF.
>>>
>>> > I'm proposing that, when a transaction has already been submitted to
>>> > mempool, we would ignore both its fees and vsize when calculating
>>> package
>>> > feerate.
>>>
>>> Yes, if you receive A+B, and A is already in-mempool, I agree you can
>>> discard its feerate as B should pay for all fees checked on its own. Where
>>> I'm unclear is when you have in-mempool A+B and receive A+B'. Should B'
>>> have a fee high enough to cover the bandwidth penalty replacement
>>> (`PaysForRBF`, 2nd check) of both A+B' or only B' ?
>>>
>>> If you have a second-layer like current Lightning, you might have a
>>> counterparty commitment to replace and should always expect to have to pay
>>> for parent replacement bandwidth.
>>>
>>> Where a potential discount sounds interesting is when you have an
>>> unambiguous state on the first stage of transactions. E.g a DLC's funding
>>> transaction, which might be CPFP'd by any participant iirc.
>>>
>>> > Note that, if C' conflicts with C, it also conflicts with D, since D
>>> is a
>>> > descendant of C and would thus need to be evicted along with it.
>>>
>>> Ah once again I think it's a misunderstanding without the code under my
>>> eyes! If we do C' `PreChecks`, solve the conflicts provoked by it, i.e mark
>>> D for potential eviction and don't consider it for future conflicts in the
>>> rest of the package, I think D' `PreChecks` should be good ?
>>>
>>> > More generally, this example is surprising to me because I didn't think
>>> > packages would be used to fee-bump replaceable transactions. Do we
>>> want the
>>> > child to be able to replace mempool transactions as well?
>>>
>>> If we mean when you have replaceable A+B then A'+B' try to replace with
>>> a higher-feerate ? I think that's exactly the case we need for Lightning as
>>> A+B is coming from Alice and A'+B' is coming from Bob :/
>>>
>>> > I'm not sure what you mean? Let's say we have a package of parent A +
>>> child
>>> > B, where A is supposed to replace a mempool transaction A'. Are you
>>> saying
>>> > that counterparties are able to malleate the package child B, or a
>>> child of
>>> > A'?
>>>
>>> The second option, a child of A', In the LN case I think the CPFP is
>>> attached on one's anchor output.
>>>
>>> I think it's good if we assume the
>>> solve-conflicts-after-parent's-`PreChecks` approach mentioned above, or fixing
>>> inherited signaling or full-rbf ?
>>>
>>> > Sorry, I don't understand what you mean by "preserve the package
>>> > integrity?" Could you elaborate?
>>>
>>> After thinking about it, the relaxation around the "new" unconfirmed
>>> input is not linked to trimming but, I would say, more to the multi-parent
>>> support.
>>>
>>> Let's say you have A+B trying to replace C+D where B is also spending
>>> already in-mempool E. To succeed, you need to waive the no-new-unconfirmed
>>> input rule, as D isn't spending E.
>>>
>>> So good, I think we agree on the problem description here.
>>>
>>> > I am in agreement with your calculations but unsure if we disagree on
>>> the
>>> > expected outcome. Yes, B has an ancestor score of 10sat/vb and D has an
>>> > ancestor score of ~2.9sat/vb. Since D's ancestor score is lower than
>>> B's,
>>> > it fails the proposed package RBF Rule #2, so this package would be
>>> > rejected. Does this meet your expectations?
>>>
>>> Well what sounds odd to me, in my example, we fail D even if it has a
>>> higher-fee than B. Like A+B absolute fees are 2000 sats and A+C+D absolute
>>> fees are 3500 sats ?
>>>
>>> Is this compatible with a model where a miner prioritizes absolute fees
>>> over ancestor score, in the case that mempools aren't full enough to
>>> fill a block ?
>>>
>>> Let me know if I can clarify a point.
>>>
>>> Antoine
>>>
>>> Le lun. 20 sept. 2021 à 11:10, Gloria Zhao <gloriajzhao@gmail•com> a
>>> écrit :
>>>
>>>>
>>>> Hi Antoine,
>>>>
>>>> First of all, thank you for the thorough review. I appreciate your
>>>> insight on LN requirements.
>>>>
>>>> > IIUC, you have a package A+B+C submitted for acceptance and A is
>>>> already in your mempool. You trim out A from the package and then evaluate
>>>> B+C.
>>>>
>>>> > I think this might be an issue if A is the higher-fee element of the
>>>> ABC package. B+C package fees might be under the mempool min fee and will
>>>> be rejected, potentially breaking the acceptance expectations of the
>>>> package issuer ?
>>>>
>>>> Correct, if B+C is too low feerate to be accepted, we will reject it. I
>>>> prefer this because it is incentive compatible: A can be mined by itself,
>>>> so there's no reason to prefer A+B+C instead of A.
>>>> As another way of looking at this, consider the case where we do accept
>>>> A+B+C and it sits at the "bottom" of our mempool. If our mempool reaches
>>>> capacity, we evict the lowest descendant feerate transactions, which are
>>>> B+C in this case. This gives us the same resulting mempool, with A and not
>>>> B+C.
>>>>
>>>>
>>>> > Further, I think the dedup should be done on wtxid, as you might have
>>>> multiple valid witnesses. Though with varying vsizes and as such offering
>>>> different feerates.
>>>>
>>>> I agree that variations of the same package with different witnesses is
>>>> a case that must be handled. I consider witness replacement to be a project
>>>> that can be done in parallel to package mempool acceptance because being
>>>> able to accept packages does not worsen the problem of a
>>>> same-txid-different-witness "pinning" attack.
>>>>
>>>> If or when we have witness replacement, the logic is: if the individual
>>>> transaction is enough to replace the mempool one, the replacement will
>>>> happen during the preceding individual transaction acceptance, and
>>>> deduplication logic will work. Otherwise, we will try to deduplicate by
>>>> wtxid, see that we need a package witness replacement, and use the package
>>>> feerate to evaluate whether this is economically rational.
>>>>
>>>> See the #22290 "handle package transactions already in mempool" commit (
>>>> https://github.com/bitcoin/bitcoin/pull/22290/commits/fea75a2237b46cf76145242fecad7e274bfcb5ff),
>>>> which handles the case of same-txid-different-witness by simply using the
>>>> transaction in the mempool for now, with TODOs for what I just described.
>>>>
>>>>
>>>> > I'm not clearly understanding the accepted topologies. By "parent and
>>>> child to share a parent", do you mean the set of transactions A, B, C,
>>>> where B is spending A and C is spending A and B would be correct ?
>>>>
>>>> Yes, that is what I meant. Yes, that would be a valid package under these
>>>> rules.
>>>>
>>>> > If yes, is there a width-limit introduced or do we fall back on
>>>> MAX_PACKAGE_COUNT=25 ?
>>>>
>>>> No, there is no limit on connectivity other than "child with all
>>>> unconfirmed parents." We will enforce MAX_PACKAGE_COUNT=25 and child's
>>>> in-mempool + in-package ancestor limits.
>>>>
>>>>
>>>> > Considering the current Core mempool acceptance rules, I think CPFP
>>>> batching is unsafe for LN time-sensitive closures. A successful malicious
>>>> tx-relay jamming of one channel commitment transaction would contaminate
>>>> the remaining commitments sharing the same package.
>>>>
>>>> > E.g, you broadcast the package A+B+C+D+E where A,B,C,D are commitment
>>>> transactions and E a shared CPFP. If a malicious A' transaction has a
>>>> better feerate than A, the whole package acceptance will fail. Even if A'
>>>> confirms in the following block,
>>>> the propagation and confirmation of B+C+D have been delayed. This could
>>>> result in a loss of funds.
>>>>
>>>> Please note that A may replace A' even if A' has higher fees than A
>>>> individually, because the proposed package RBF utilizes the fees and size
>>>> of the entire package. This just requires E to pay enough fees, although
>>>> this can be pretty high if there are also potential B' and C' competing
>>>> commitment transactions that we don't know about.
>>>>
>>>>
>>>> > IMHO, I'm leaning towards deploying 1-parent/1-child during a first
>>>> phase. I think it's the most conservative step still improving
>>>> second-layer safety.
>>>>
>>>> So far, my understanding is that multi-parent-1-child is desired for
>>>> batched fee-bumping (
>>>> https://github.com/bitcoin/bitcoin/pull/22674#issuecomment-897951289)
>>>> and I've also seen your response which I have less context on (
>>>> https://github.com/bitcoin/bitcoin/pull/22674#issuecomment-900352202).
>>>> That being said, I am happy to create a new proposal for 1 parent + 1 child
>>>> (which would be slightly simpler) and plan for moving to
>>>> multi-parent-1-child later if that is preferred. I am very interested in
>>>> hearing feedback on that approach.
>>>>
>>>>
>>>> > If A+B is submitted to replace A', where A pays 0 sats, B pays 200
>>>> sats and A' pays 100 sats. If we apply the individual RBF on A, A+B
>>>> acceptance fails. For this reason I think the individual RBF should be
>>>> bypassed and only the package RBF apply ?
>>>>
>>>> I think there is a misunderstanding here - let me describe what I'm
>>>> proposing we'd do in this situation: we'll try individual submission for A,
>>>> see that it fails due to "insufficient fees." Then, we'll try package
>>>> validation for A+B and use package RBF. If A+B pays enough, it can still
>>>> replace A'. If A fails for a bad signature, we won't look at B or A+B. Does
>>>> this meet your expectations?
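>>>>
>>>> In rough pseudocode, the flow I have in mind (illustrative only,
>>>> hypothetical names):
>>>>
>>>>     def submit_with_package_fallback(parents, child, mempool):
>>>>         txs = parents + [child]
>>>>         for tx in txs:
>>>>             ok, reason = mempool.try_accept(tx)
>>>>             if not ok and reason != "insufficient fee":
>>>>                 return "reject"       # e.g. bad signature: don't evaluate B or A+B
>>>>         if all(mempool.contains(tx.txid) for tx in txs):
>>>>             return "accepted individually"
>>>>         # Some transactions failed only on fees: evaluate them together, using
>>>>         # package feerate and package RBF against any conflicts such as A'.
>>>>         return mempool.try_accept_package(txs)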
>>>>
>>>>
>>>> > What problem are you trying to solve by the package feerate *after*
>>>> dedup rule ?
>>>> > My understanding is that an in-package transaction might be already
>>>> in the mempool. Therefore, to compute a correct RBF penalty replacement,
>>>> the vsize of this transaction could be discarded lowering the cost of
>>>> package RBF.
>>>>
>>>> I'm proposing that, when a transaction has already been submitted to
>>>> mempool, we would ignore both its fees and vsize when calculating package
>>>> feerate. In example G2, we shouldn't count M1 fees after its submission to
>>>> mempool, since M1's fees have already been used to pay for its individual
>>>> bandwidth, and it shouldn't be used again to pay for P2 and P3's bandwidth.
>>>> We also shouldn't count its vsize, since it has already been paid for.
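>>>>
>>>> In other words (a minimal sketch with hypothetical names):
>>>>
>>>>     def package_feerate_after_dedup(package, mempool):
>>>>         # Transactions already in the mempool (like M1 here) contribute
>>>>         # neither fees nor vsize to the package feerate.
>>>>         new_txs     = [tx for tx in package if not mempool.contains(tx.txid)]
>>>>         total_fee   = sum(tx.modified_fee for tx in new_txs)
>>>>         total_vsize = sum(tx.vsize for tx in new_txs)
>>>>         return total_fee / total_vsize if total_vsize else None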
>>>>
>>>>
>>>> > I think this is a footgunish API, as if a package issuer send the
>>>> multiple-parent-one-child package A,B,C,D where D is the child of A,B,C.
>>>> Then try to broadcast the higher-feerate C'+D' package, it should be
>>>> rejected. So it's breaking the naive broadcaster assumption that a
>>>> higher-feerate/higher-fee package always replaces ?
>>>>
>>>> Note that, if C' conflicts with C, it also conflicts with D, since D is
>>>> a descendant of C and would thus need to be evicted along with it.
>>>> Implicitly, D' would not be in conflict with D.
>>>> More generally, this example is surprising to me because I didn't think
>>>> packages would be used to fee-bump replaceable transactions. Do we want the
>>>> child to be able to replace mempool transactions as well? This can be
>>>> implemented with a bit of additional logic.
>>>>
>>>> > I think this is unsafe for L2s if counterparties have malleability of
>>>> the child transaction. They can block your package replacement by
>>>> opting-out from RBF signaling. IIRC, LN's "anchor output" presents such an
>>>> ability.
>>>>
>>>> I'm not sure what you mean? Let's say we have a package of parent A +
>>>> child B, where A is supposed to replace a mempool transaction A'. Are you
>>>> saying that counterparties are able to malleate the package child B, or a
>>>> child of A'? If they can malleate a child of A', that shouldn't matter as
>>>> long as A' is signaling replacement. This would be handled identically with
>>>> full RBF and what Core currently implements.
>>>>
>>>> > I think this is an issue brought by the trimming during the dedup
>>>> phase. If we preserve the package integrity, only re-using the tx-level
>>>> checks results of already in-mempool transactions to gain in CPU time we
>>>> won't have this issue. Package children can add unconfirmed inputs as long as
>>>> they're in-package, the bip125 rule2 is only evaluated against parents ?
>>>>
>>>> Sorry, I don't understand what you mean by "preserve the package
>>>> integrity?" Could you elaborate?
>>>>
>>>> > Let's say you have in-mempool A, B where A pays 10 sat/vb for 100
>>>> vbytes and B pays 10 sat/vb for 100 vbytes. You have the candidate
>>>> replacement D spending both A and C where D pays 15sat/vb for 100 vbytes
>>>> and C pays 1 sat/vb for 1000 vbytes.
>>>>
>>>> > Package A + B ancestor score is 10 sat/vb.
>>>>
>>>> > D has a higher feerate/absolute fee than B.
>>>>
>>>> > Package A + C + D ancestor score is ~ 3 sat/vb ((A's 1000 sats + C's
>>>> 1000 sats + D's 1500 sats) / (A's 100 vb + C's 1000 vb + D's 100 vb))
>>>>
>>>> I am in agreement with your calculations but unsure if we disagree on
>>>> the expected outcome. Yes, B has an ancestor score of 10sat/vb and D has an
>>>> ancestor score of ~2.9sat/vb. Since D's ancestor score is lower than B's,
>>>> it fails the proposed package RBF Rule #2, so this package would be
>>>> rejected. Does this meet your expectations?
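>>>>
>>>> Writing the numbers out (sizes as given above):
>>>>
>>>>     # B's ancestor set is {A, B}; D's ancestor set is {A, C, D}.
>>>>     b_ancestor_feerate = (1000 + 1000) / (100 + 100)                 # 10 sat/vB
>>>>     d_ancestor_feerate = (1000 + 1000 + 1500) / (100 + 1000 + 100)   # ~2.9 sat/vB
>>>>     passes_rule_2 = d_ancestor_feerate >= b_ancestor_feerate         # False: reject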
>>>>
>>>> Thank you for linking to projects that might be interested in package
>>>> relay :)
>>>>
>>>> Thanks,
>>>> Gloria
>>>>
>>>> On Mon, Sep 20, 2021 at 12:16 AM Antoine Riard <antoine.riard@gmail•com>
>>>> wrote:
>>>>
>>>>> Hi Gloria,
>>>>>
>>>>> > A package may contain transactions that are already in the mempool.
>>>>> We
>>>>> > remove
>>>>> > ("deduplicate") those transactions from the package for the purposes
>>>>> of
>>>>> > package
>>>>> > mempool acceptance. If a package is empty after deduplication, we do
>>>>> > nothing.
>>>>>
>>>>> IIUC, you have a package A+B+C submitted for acceptance and A is
>>>>> already in your mempool. You trim out A from the package and then evaluate
>>>>> B+C.
>>>>>
>>>>> I think this might be an issue if A is the higher-fee element of the
>>>>> ABC package. B+C package fees might be under the mempool min fee and will
>>>>> be rejected, potentially breaking the acceptance expectations of the
>>>>> package issuer ?
>>>>>
>>>>> Further, I think the dedup should be done on wtxid, as you might have
>>>>> multiple valid witnesses. Though with varying vsizes and as such offering
>>>>> different feerates.
>>>>>
>>>>> E.g you're going to evaluate the package A+B and A' is already in your
>>>>> mempool with a bigger valid witness. You trim A based on txid, then you
>>>>> evaluate A'+B, which fails the fee checks. However, evaluating A+B would
>>>>> have been a success.
>>>>>
>>>>> AFAICT, the dedup rationale would be to save on CPU time/IO disk, to
>>>>> avoid repeated signatures verification and parent UTXOs fetches ? Can we
>>>>> achieve the same goal by bypassing tx-level checks for already-in txn while
>>>>> conserving the package integrity for package-level checks ?
>>>>>
>>>>> > Note that it's possible for the parents to be
>>>>> > indirect
>>>>> > descendants/ancestors of one another, or for parent and child to
>>>>> share a
>>>>> > parent,
>>>>> > so we cannot make any other topology assumptions.
>>>>>
>>>>> I'm not clearly understanding the accepted topologies. By "parent and
>>>>> child to share a parent", do you mean the set of transactions A, B, C,
>>>>> where B is spending A and C is spending A and B would be correct ?
>>>>>
>>>>> If yes, is there a width-limit introduced or we fallback on
>>>>> MAX_PACKAGE_COUNT=25 ?
>>>>>
>>>>> IIRC, one rationale to come with this topology limitation was to lower
>>>>> the DoS risks when potentially deploying p2p packages.
>>>>>
>>>>> Considering the current Core's mempool acceptance rules, I think CPFP
>>>>> batching is unsafe for LN time-sensitive closure. A malicious tx-relay
>>>>> jamming attack succeeding against one channel commitment transaction would contaminate
>>>>> the remaining commitments sharing the same package.
>>>>>
>>>>> E.g, you broadcast the package A+B+C+D+E where A,B,C,D are commitment
>>>>> transactions and E a shared CPFP. If a malicious A' transaction has a
>>>>> better feerate than A, the whole package acceptance will fail. Even if A'
>>>>> confirms in the following block,
>>>>> the propagation and confirmation of B+C+D have been delayed. This
>>>>> could result in a loss of funds.
>>>>>
>>>>> That said, if you're broadcasting commitment transactions without
>>>>> time-sensitive HTLC outputs, I think the batching is effectively a fee
>>>>> saving as you don't have to duplicate the CPFP.
>>>>>
>>>>> IMHO, I'm leaning towards deploying during a first phase
>>>>> 1-parent/1-child. I think it's the most conservative step still improving
>>>>> second-layer safety.
>>>>>
>>>>> > *Rationale*:  It would be incorrect to use the fees of transactions
>>>>> that are
>>>>> > already in the mempool, as we do not want a transaction's fees to be
>>>>> > double-counted for both its individual RBF and package RBF.
>>>>>
>>>>> I'm unsure about the logical order of the checks proposed.
>>>>>
>>>>> If A+B is submitted to replace A', where A pays 0 sats, B pays 200
>>>>> sats and A' pays 100 sats. If we apply the individual RBF on A, A+B
>>>>> acceptance fails. For this reason I think the individual RBF should be
>>>>> bypassed and only the package RBF apply ?
>>>>>
>>>>> Note this situation is plausible, with current LN design, your
>>>>> counterparty can have a commitment transaction with a better fee just by
>>>>> selecting a higher `dust_limit_satoshis` than yours.
>>>>>
>>>>> > Examples F and G [14] show the same package, but P1 is submitted
>>>>> > individually before
>>>>> > the package in example G. In example F, we can see that the 300vB
>>>>> package
>>>>> > pays
>>>>> > an additional 200sat in fees, which is not enough to pay for its own
>>>>> > bandwidth
>>>>> > (BIP125#4). In example G, we can see that P1 pays enough to replace
>>>>> M1, but
>>>>> > using P1's fees again during package submission would make it look
>>>>> like a
>>>>> > 300sat
>>>>> > increase for a 200vB package. Even including its fees and size would
>>>>> not be
>>>>> > sufficient in this example, since the 300sat looks like enough for
>>>>> the 300vB
>>>>> package. The calculation after deduplication is 100sat increase for
>>>>> a
>>>>> > package
>>>>> > of size 200vB, which correctly fails BIP125#4. Assume all
>>>>> transactions have
>>>>> > a
>>>>> > size of 100vB.
>>>>>
>>>>> What problem are you trying to solve by the package feerate *after*
>>>>> dedup rule ?
>>>>>
>>>>> My understanding is that an in-package transaction might be already in
>>>>> the mempool. Therefore, to compute a correct RBF penalty replacement, the
>>>>> vsize of this transaction could be discarded lowering the cost of package
>>>>> RBF.
>>>>>
>>>>> If we keep a "safe" dedup mechanism (see my point above), I think this
>>>>> discount is justified, as the validation cost of node operators is paid for
>>>>> ?
>>>>>
>>>>> > The child cannot replace mempool transactions.
>>>>>
>>>>> Let's say you issue package A+B, then package C+B', where B' is a
>>>>> child of both A and C. This rule fails the acceptance of C+B' ?
>>>>>
>>>>> I think this is a footgunish API, as if a package issuer send the
>>>>> multiple-parent-one-child package A,B,C,D where D is the child of A,B,C.
>>>>> Then try to broadcast the higher-feerate C'+D' package, it should be
>>>>> rejected. So it's breaking the naive broadcaster assumption that a
>>>>> higher-feerate/higher-fee package always replaces ? And it might be unsafe
>>>>> in protocols where states are symmetric. E.g a malicious counterparty
>>>>> broadcasts first S+A, then you honestly broadcast S+B, where B pays better
>>>>> fees.
>>>>>
>>>>> > All mempool transactions to be replaced must signal replaceability.
>>>>>
>>>>> I think this is unsafe for L2s if counterparties have malleability of
>>>>> the child transaction. They can block your package replacement by
>>>>> opting-out from RBF signaling. IIRC, LN's "anchor output" presents such an
>>>>> ability.
>>>>>
>>>>> I think it's better to either fix inherited signaling or move towards
>>>>> full-rbf.
>>>>>
>>>>> > if a package parent has already been submitted, it would
>>>>> > look
>>>>> >like the child is spending a "new" unconfirmed input.
>>>>>
>>>>> I think this is an issue brought by the trimming during the dedup
>>>>> phase. If we preserve the package integrity, only re-using the tx-level
>>>>> checks results of already in-mempool transactions to gain in CPU time we
>>>>> won't have this issue. Package children can add unconfirmed inputs as long as
>>>>> they're in-package, the bip125 rule2 is only evaluated against parents ?
>>>>>
>>>>> > However, we still achieve the same goal of requiring the
>>>>> > replacement
>>>>> > transactions to have an ancestor score at least as high as the
>>>>> original
>>>>> > ones.
>>>>>
>>>>> I'm not sure if this holds...
>>>>>
>>>>> Let's say you have in-mempool A, B where A pays 10 sat/vb for 100
>>>>> vbytes and B pays 10 sat/vb for 100 vbytes. You have the candidate
>>>>> replacement D spending both A and C where D pays 15sat/vb for 100 vbytes
>>>>> and C pays 1 sat/vb for 1000 vbytes.
>>>>>
>>>>> Package A + B ancestor score is 10 sat/vb.
>>>>>
>>>>> D has a higher feerate/absolute fee than B.
>>>>>
>>>>> Package A + C + D ancestor score is ~ 3 sat/vb ((A's 1000 sats + C's
>>>>> 1000 sats + D's 1500 sats) /
>>>>> (A's 100 vb + C's 1000 vb + D's 100 vb))
>>>>>
>>>>> Overall, this is a review through the lenses of LN requirements. I
>>>>> think other L2 protocols/applications
>>>>> could be candidates to using package accept/relay such as:
>>>>> * https://github.com/lightninglabs/pool
>>>>> * https://github.com/discreetlogcontracts/dlcspecs
>>>>> * https://github.com/bitcoin-teleport/teleport-transactions/
>>>>> * https://github.com/sapio-lang/sapio
>>>>> *
>>>>> https://github.com/commerceblock/mercury/blob/master/doc/statechains.md
>>>>> * https://github.com/revault/practical-revault
>>>>>
>>>>> Thanks for rolling the ball forward on this subject.
>>>>>
>>>>> Antoine
>>>>>
>>>>> On Thu, Sep 16, 2021 at 3:55 AM Gloria Zhao via bitcoin-dev <
>>>>> bitcoin-dev@lists•linuxfoundation.org> wrote:
>>>>>
>>>>>> Hi there,
>>>>>>
>>>>>> I'm writing to propose a set of mempool policy changes to enable
>>>>>> package
>>>>>> validation (in preparation for package relay) in Bitcoin Core. These
>>>>>> would not
>>>>>> be consensus or P2P protocol changes. However, since mempool policy
>>>>>> significantly affects transaction propagation, I believe this is
>>>>>> relevant for
>>>>>> the mailing list.
>>>>>>
>>>>>> My proposal enables packages consisting of multiple parents and 1
>>>>>> child. If you
>>>>>> develop software that relies on specific transaction relay
>>>>>> assumptions and/or
>>>>>> are interested in using package relay in the future, I'm very
>>>>>> interested to hear
>>>>>> your feedback on the utility or restrictiveness of these package
>>>>>> policies for
>>>>>> your use cases.
>>>>>>
>>>>>> A draft implementation of this proposal can be found in [Bitcoin Core
>>>>>> PR#22290][1].
>>>>>>
>>>>>> An illustrated version of this post can be found at
>>>>>> https://gist.github.com/glozow/dc4e9d5c5b14ade7cdfac40f43adb18a.
>>>>>> I have also linked the images below.
>>>>>>
>>>>>> ## Background
>>>>>>
>>>>>> Feel free to skip this section if you are already familiar with
>>>>>> mempool policy
>>>>>> and package relay terminology.
>>>>>>
>>>>>> ### Terminology Clarifications
>>>>>>
>>>>>> * Package = an ordered list of related transactions, representable by
>>>>>> a Directed
>>>>>>   Acyclic Graph.
>>>>>> * Package Feerate = the total modified fees divided by the total
>>>>>> virtual size of
>>>>>>   all transactions in the package.
>>>>>>     - Modified fees = a transaction's base fees + fee delta applied
>>>>>> by the user
>>>>>>       with `prioritisetransaction`. As such, we expect this to vary
>>>>>> across
>>>>>> mempools.
>>>>>>     - Virtual Size = the maximum of virtual sizes calculated using
>>>>>> [BIP141
>>>>>>       virtual size][2] and sigop weight. [Implemented here in Bitcoin
>>>>>> Core][3].
>>>>>>     - Note that feerate is not necessarily based on the base fees and
>>>>>> serialized
>>>>>>       size.
>>>>>>
>>>>>> * Fee-Bumping = user/wallet actions that take advantage of miner
>>>>>> incentives to
>>>>>>   boost a transaction's candidacy for inclusion in a block, including
>>>>>> Child Pays
>>>>>> for Parent (CPFP) and [BIP125][12] Replace-by-Fee (RBF). Our
>>>>>> intention in
>>>>>> mempool policy is to recognize when the new transaction is more
>>>>>> economical to
>>>>>> mine than the original one(s) but not open DoS vectors, so there are
>>>>>> some
>>>>>> limitations.
>>>>>>
>>>>>> ### Policy
>>>>>>
>>>>>> The purpose of the mempool is to store the best (to be most
>>>>>> incentive-compatible
>>>>>> with miners, highest feerate) candidates for inclusion in a block.
>>>>>> Miners use
>>>>>> the mempool to build block templates. The mempool is also useful as a
>>>>>> cache for
>>>>>> boosting block relay and validation performance, aiding transaction
>>>>>> relay, and
>>>>>> generating feerate estimations.
>>>>>>
>>>>>> Ideally, all consensus-valid transactions paying reasonable fees
>>>>>> should make it
>>>>>> to miners through normal transaction relay, without any special
>>>>>> connectivity or
>>>>>> relationships with miners. On the other hand, nodes do not have
>>>>>> unlimited
>>>>>> resources, and a P2P network designed to let any honest node
>>>>>> broadcast their
>>>>>> transactions also exposes the transaction validation engine to DoS
>>>>>> attacks from
>>>>>> malicious peers.
>>>>>>
>>>>>> As such, for unconfirmed transactions we are considering for our
>>>>>> mempool, we
>>>>>> apply a set of validation rules in addition to consensus, primarily
>>>>>> to protect
>>>>>> us from resource exhaustion and aid our efforts to keep the highest
>>>>>> fee
>>>>>> transactions. We call this mempool _policy_: a set of (configurable,
>>>>>> node-specific) rules that transactions must abide by in order to be
>>>>>> accepted
>>>>>> into our mempool. Transaction "Standardness" rules and mempool
>>>>>> restrictions such
>>>>>> as "too-long-mempool-chain" are both examples of policy.
>>>>>>
>>>>>> ### Package Relay and Package Mempool Accept
>>>>>>
>>>>>> In transaction relay, we currently consider transactions one at a
>>>>>> time for
>>>>>> submission to the mempool. This creates a limitation in the node's
>>>>>> ability to
>>>>>> determine which transactions have the highest feerates, since we
>>>>>> cannot take
>>>>>> into account descendants (i.e. cannot use CPFP) until all the
>>>>>> transactions are
>>>>>> in the mempool. Similarly, we cannot use a transaction's descendants
>>>>>> when
>>>>>> considering it for RBF. When an individual transaction does not meet
>>>>>> the mempool
>>>>>> minimum feerate and the user isn't able to create a replacement
>>>>>> transaction
>>>>>> directly, it will not be accepted by mempools.
>>>>>>
>>>>>> This limitation presents a security issue for applications and users
>>>>>> relying on
>>>>>> time-sensitive transactions. For example, Lightning and other
>>>>>> protocols create
>>>>>> UTXOs with multiple spending paths, where one counterparty's spending
>>>>>> path opens
>>>>>> up after a timelock, and users are protected from cheating scenarios
>>>>>> as long as
>>>>>> they redeem on-chain in time. A key security assumption is that all
>>>>>> parties'
>>>>>> transactions will propagate and confirm in a timely manner. This
>>>>>> assumption can
>>>>>> be broken if fee-bumping does not work as intended.
>>>>>>
>>>>>> The end goal for Package Relay is to consider multiple transactions
>>>>>> at the same
>>>>>> time, e.g. a transaction with its high-fee child. This may help us
>>>>>> better
>>>>>> determine whether transactions should be accepted to our mempool,
>>>>>> especially if
>>>>>> they don't meet fee requirements individually or are better RBF
>>>>>> candidates as a
>>>>>> package. A combination of changes to mempool validation logic,
>>>>>> policy, and
>>>>>> transaction relay allows us to better propagate the transactions with
>>>>>> the
>>>>>> highest package feerates to miners, and makes fee-bumping tools more
>>>>>> powerful
>>>>>> for users.
>>>>>>
>>>>>> The "relay" part of Package Relay suggests P2P messaging changes, but
>>>>>> a large
>>>>>> part of the changes are in the mempool's package validation logic. We
>>>>>> call this
>>>>>> *Package Mempool Accept*.
>>>>>>
>>>>>> ### Previous Work
>>>>>>
>>>>>> * Given that mempool validation is DoS-sensitive and complex, it
>>>>>> would be
>>>>>>   dangerous to haphazardly tack on package validation logic. Many
>>>>>> efforts have
>>>>>> been made to make mempool validation less opaque (see [#16400][4],
>>>>>> [#21062][5],
>>>>>> [#22675][6], [#22796][7]).
>>>>>> * [#20833][8] Added basic capabilities for package validation, test
>>>>>> accepts only
>>>>>>   (no submission to mempool).
>>>>>> * [#21800][9] Implemented package ancestor/descendant limit checks
>>>>>> for arbitrary
>>>>>>   packages. Still test accepts only.
>>>>>> * Previous package relay proposals (see [#16401][10], [#19621][11]).
>>>>>>
>>>>>> ### Existing Package Rules
>>>>>>
>>>>>> These are in master as introduced in [#20833][8] and [#21800][9].
>>>>>> I'll consider
>>>>>> them as "given" in the rest of this document, though they can be
>>>>>> changed, since
>>>>>> package validation is test-accept only right now.
>>>>>>
>>>>>> 1. A package cannot exceed `MAX_PACKAGE_COUNT=25` count and
>>>>>> `MAX_PACKAGE_SIZE=101KvB` total size [8]
>>>>>>
>>>>>>    *Rationale*: This is already enforced as mempool
>>>>>> ancestor/descendant limits.
>>>>>> Presumably, transactions in a package are all related, so exceeding
>>>>>> this limit
>>>>>> would mean that the package can either be split up or it wouldn't
>>>>>> pass this
>>>>>> mempool policy.
>>>>>>
>>>>>> 2. Packages must be topologically sorted: if any dependencies exist
>>>>>> between
>>>>>> transactions, parents must appear somewhere before children. [8]
>>>>>>
>>>>>> 3. A package cannot have conflicting transactions, i.e. none of them
>>>>>> can spend
>>>>>> the same inputs. This also means there cannot be duplicate
>>>>>> transactions. [8]
>>>>>>
>>>>>> 4. When packages are evaluated against ancestor/descendant limits in
>>>>>> a test
>>>>>> accept, the union of all of their descendants and ancestors is
>>>>>> considered. This
>>>>>> is essentially a "worst case" heuristic where every transaction in
>>>>>> the package
>>>>>> is treated as each other's ancestor and descendant. [8]
>>>>>> Packages for which ancestor/descendant limits are accurately captured
>>>>>> by this
>>>>>> heuristic: [19]
>>>>>>
>>>>>> There are also limitations such as the fact that CPFP carve out is
>>>>>> not applied
>>>>>> to package transactions. #20833 also disables RBF in package
>>>>>> validation; this
>>>>>> proposal overrides that to allow packages to use RBF.
>>>>>>
>>>>>> ## Proposed Changes
>>>>>>
>>>>>> The next step in the Package Mempool Accept project is to implement
>>>>>> submission
>>>>>> to mempool, initially through RPC only. This allows us to test the
>>>>>> submission
>>>>>> logic before exposing it on P2P.
>>>>>>
>>>>>> ### Summary
>>>>>>
>>>>>> - Packages may contain already-in-mempool transactions.
>>>>>> - Packages are 2 generations, Multi-Parent-1-Child.
>>>>>> - Fee-related checks use the package feerate. This means that wallets
>>>>>> can
>>>>>> create a package that utilizes CPFP.
>>>>>> - Parents are allowed to RBF mempool transactions with a set of rules
>>>>>> similar
>>>>>>   to BIP125. This enables a combination of CPFP and RBF, where a
>>>>>> transaction's descendant fees pay for replacing mempool conflicts.
>>>>>>
>>>>>> There is a draft implementation in [#22290][1]. It is WIP, but
>>>>>> feedback is
>>>>>> always welcome.
>>>>>>
>>>>>> ### Details
>>>>>>
>>>>>> #### Packages May Contain Already-in-Mempool Transactions
>>>>>>
>>>>>> A package may contain transactions that are already in the mempool.
>>>>>> We remove
>>>>>> ("deduplicate") those transactions from the package for the purposes
>>>>>> of package
>>>>>> mempool acceptance. If a package is empty after deduplication, we do
>>>>>> nothing.
>>>>>>
>>>>>> *Rationale*: Mempools vary across the network. It's possible for a
>>>>>> parent to be
>>>>>> accepted to the mempool of a peer on its own due to differences in
>>>>>> policy and
>>>>>> fee market fluctuations. We should not reject or penalize the entire
>>>>>> package for
>>>>>> an individual transaction as that could be a censorship vector.
>>>>>>
>>>>>> #### Packages Are Multi-Parent-1-Child
>>>>>>
>>>>>> Only packages of a specific topology are permitted. Namely, a package
>>>>>> is exactly
>>>>>> 1 child with all of its unconfirmed parents. After deduplication, the
>>>>>> package
>>>>>> may be exactly the same, empty, 1 child, 1 child with just some of its
>>>>>> unconfirmed parents, etc. Note that it's possible for the parents to
>>>>>> be indirect
>>>>>> descendants/ancestors of one another, or for parent and child to
>>>>>> share a parent,
>>>>>> so we cannot make any other topology assumptions.
>>>>>>
>>>>>> *Rationale*: This allows for fee-bumping by CPFP. Allowing multiple
>>>>>> parents
>>>>>> makes it possible to fee-bump a batch of transactions. Restricting
>>>>>> packages to a
>>>>>> defined topology is also easier to reason about and simplifies the
>>>>>> validation
>>>>>> logic greatly. Multi-parent-1-child allows us to think of the package
>>>>>> as one big
>>>>>> transaction, where:
>>>>>>
>>>>>> - Inputs = all the inputs of parents + inputs of the child that come
>>>>>> from
>>>>>>   confirmed UTXOs
>>>>>> - Outputs = all the outputs of the child + all outputs of the parents
>>>>>> that
>>>>>>   aren't spent by other transactions in the package
>>>>>>
>>>>>> Examples of packages that follow this rule (variations of example A
>>>>>> show some
>>>>>> possibilities after deduplication): ![image][15]
>>>>>>
>>>>>> #### Fee-Related Checks Use Package Feerate
>>>>>>
>>>>>> Package Feerate = the total modified fees divided by the total
>>>>>> virtual size of
>>>>>> all transactions in the package.
>>>>>>
>>>>>> To meet the two feerate requirements of a mempool, i.e., the
>>>>>> pre-configured
>>>>>> minimum relay feerate (`minRelayTxFee`) and dynamic mempool minimum
>>>>>> feerate, the
>>>>>> total package feerate is used instead of the individual feerate. The
>>>>>> individual
>>>>>> transactions are allowed to be below feerate requirements if the
>>>>>> package meets
>>>>>> the feerate requirements. For example, the parent(s) in the package
>>>>>> can have 0
>>>>>> fees but be paid for by the child.
>>>>>>
>>>>>> *Rationale*: This can be thought of as "CPFP within a package,"
>>>>>> solving the
>>>>>> issue of a parent not meeting minimum fees on its own. This allows L2
>>>>>> applications to adjust their fees at broadcast time instead of
>>>>>> overshooting or
>>>>>> risking getting stuck/pinned.
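>>>>>>
>>>>>> A minimal sketch of this check (illustrative Python, not the implementation):
>>>>>>
>>>>>>     def package_meets_feerate_floors(txs, min_relay_feerate, mempool_min_feerate):
>>>>>>         # Evaluate the package as a whole; individual transactions (e.g. a
>>>>>>         # zero-fee parent) may be below the floors as long as the package isn't.
>>>>>>         total_fee   = sum(tx.modified_fee for tx in txs)
>>>>>>         total_vsize = sum(tx.vsize for tx in txs)
>>>>>>         return total_fee / total_vsize >= max(min_relay_feerate, mempool_min_feerate)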
>>>>>>
>>>>>> We use the package feerate of the package *after deduplication*.
>>>>>>
>>>>>> *Rationale*:  It would be incorrect to use the fees of transactions
>>>>>> that are
>>>>>> already in the mempool, as we do not want a transaction's fees to be
>>>>>> double-counted for both its individual RBF and package RBF.
>>>>>>
>>>>>> Examples F and G [14] show the same package, but P1 is submitted
>>>>>> individually before
>>>>>> the package in example G. In example F, we can see that the 300vB
>>>>>> package pays
>>>>>> an additional 200sat in fees, which is not enough to pay for its own
>>>>>> bandwidth
>>>>>> (BIP125#4). In example G, we can see that P1 pays enough to replace
>>>>>> M1, but
>>>>>> using P1's fees again during package submission would make it look
>>>>>> like a 300sat
>>>>>> increase for a 200vB package. Even including its fees and size would
>>>>>> not be
>>>>>> sufficient in this example, since the 300sat looks like enough for
>>>>>> the 300vB
>>>>>> package. The calculation after deduplication is 100sat increase for
>>>>>> a package
>>>>>> of size 200vB, which correctly fails BIP125#4. Assume all
>>>>>> transactions have a
>>>>>> size of 100vB.
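>>>>>>
>>>>>> Spelled out with those numbers (and assuming the default 1 sat/vB
>>>>>> incremental relay feerate):
>>>>>>
>>>>>>     incremental_relay_feerate = 1.0                          # sat/vB, assumed
>>>>>>
>>>>>>     # Example F: 200 sat of additional fees for a 300 vB package.
>>>>>>     example_f_ok = 200 / 300 >= incremental_relay_feerate    # False: fails BIP125#4
>>>>>>
>>>>>>     # Example G, double-counting P1: looks like 300 sat for 300 vB.
>>>>>>     wrong_g_ok   = 300 / 300 >= incremental_relay_feerate    # True, but incorrect accounting
>>>>>>
>>>>>>     # Example G, after deduplication: 100 sat for 200 vB.
>>>>>>     correct_g_ok = 100 / 200 >= incremental_relay_feerate    # False: correctly fails BIP125#4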
>>>>>>
>>>>>> #### Package RBF
>>>>>>
>>>>>> If a package meets feerate requirements as a package, the parents in
>>>>>> the
>>>>>> transaction are allowed to replace-by-fee mempool transactions. The
>>>>>> child cannot
>>>>>> replace mempool transactions. Multiple transactions can replace the
>>>>>> same
>>>>>> transaction, but in order to be valid, none of the transactions can
>>>>>> try to
>>>>>> replace an ancestor of another transaction in the same package (which
>>>>>> would thus
>>>>>> make its inputs unavailable).
>>>>>>
>>>>>> *Rationale*: Even if we are using package feerate, a package will not
>>>>>> propagate
>>>>>> as intended if RBF still requires each individual transaction to meet
>>>>>> the
>>>>>> feerate requirements.
>>>>>>
>>>>>> We use a set of rules slightly modified from BIP125 as follows:
>>>>>>
>>>>>> ##### Signaling (Rule #1)
>>>>>>
>>>>>> All mempool transactions to be replaced must signal replaceability.
>>>>>>
>>>>>> *Rationale*: Package RBF signaling logic should be the same for
>>>>>> package RBF and
>>>>>> single transaction acceptance. This would be updated if single
>>>>>> transaction
>>>>>> validation moves to full RBF.
>>>>>>
>>>>>> ##### New Unconfirmed Inputs (Rule #2)
>>>>>>
>>>>>> A package may include new unconfirmed inputs, but the ancestor
>>>>>> feerate of the
>>>>>> child must be at least as high as the ancestor feerates of every
>>>>>> transaction
>>>>>> being replaced. This is contrary to BIP125#2, which states "The
>>>>>> replacement
>>>>>> transaction may only include an unconfirmed input if that input was
>>>>>> included in
>>>>>> one of the original transactions. (An unconfirmed input spends an
>>>>>> output from a
>>>>>> currently-unconfirmed transaction.)"
>>>>>>
>>>>>> *Rationale*: The purpose of BIP125#2 is to ensure that the replacement
>>>>>> transaction has a higher ancestor score than the original
>>>>>> transaction(s) (see
>>>>>> [comment][13]). Example H [16] shows how adding a new unconfirmed
>>>>>> input can lower the
>>>>>> ancestor score of the replacement transaction. P1 is trying to
>>>>>> replace M1, and
>>>>>> spends an unconfirmed output of M2. P1 pays 800sat, M1 pays 600sat,
>>>>>> and M2 pays
>>>>>> 100sat. Assume all transactions have a size of 100vB. While, in
>>>>>> isolation, P1
>>>>>> looks like a better mining candidate than M1, it must be mined with
>>>>>> M2, so its
>>>>>> ancestor feerate is actually 4.5sat/vB.  This is lower than M1's
>>>>>> ancestor
>>>>>> feerate, which is 6sat/vB.
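>>>>>>
>>>>>> The arithmetic, with every transaction at 100vB:
>>>>>>
>>>>>>     # P1's ancestor set is {M2, P1}; M1 has no unconfirmed ancestors.
>>>>>>     p1_ancestor_feerate = (800 + 100) / (100 + 100)   # 4.5 sat/vB
>>>>>>     m1_ancestor_feerate = 600 / 100                   # 6.0 sat/vB
>>>>>>     better_candidate    = p1_ancestor_feerate >= m1_ancestor_feerate   # False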
>>>>>>
>>>>>> In package RBF, the rule analogous to BIP125#2 would be "none of the
>>>>>> transactions in the package can spend new unconfirmed inputs."
>>>>>> Example J [17] shows
>>>>>> why, if any of the package transactions have ancestors, package
>>>>>> feerate is no
>>>>>> longer accurate. Even though M2 and M3 are not ancestors of P1 (which
>>>>>> is the
>>>>>> replacement transaction in an RBF), we're actually interested in the
>>>>>> entire
>>>>>> package. A miner should mine M1 which is 5sat/vB instead of M2, M3,
>>>>>> P1, P2, and
>>>>>> P3, which is only 4sat/vB. The Package RBF rule cannot be loosened to
>>>>>> only allow
>>>>>> the child to have new unconfirmed inputs, either, because it can
>>>>>> still cause us
>>>>>> to overestimate the package's ancestor score.
>>>>>>
>>>>>> However, enforcing a rule analogous to BIP125#2 would not only make
>>>>>> Package RBF
>>>>>> less useful, but would also break Package RBF for packages with
>>>>>> parents already
>>>>>> in the mempool: if a package parent has already been submitted, it
>>>>>> would look
>>>>>> like the child is spending a "new" unconfirmed input. In example K
>>>>>> [18], we're
>>>>>> looking to replace M1 with the entire package including P1, P2, and
>>>>>> P3. We must
>>>>>> consider the case where one of the parents is already in the mempool
>>>>>> (in this
>>>>>> case, P2), which means we must allow P3 to have new unconfirmed
>>>>>> inputs. However,
>>>>>> M2 lowers the ancestor score of P3 to 4.3sat/vB, so we should not
>>>>>> replace M1
>>>>>> with this package.
>>>>>>
>>>>>> Thus, the package RBF rule regarding new unconfirmed inputs is less
>>>>>> strict than
>>>>>> BIP125#2. However, we still achieve the same goal of requiring the
>>>>>> replacement
>>>>>> transactions to have an ancestor score at least as high as the
>>>>>> original ones. As
>>>>>> a result, the entire package is required to be a higher feerate
>>>>>> mining candidate
>>>>>> than each of the replaced transactions.
>>>>>>
>>>>>> Another note: the [comment][13] above the BIP125#2 code in the
>>>>>> original RBF
>>>>>> implementation suggests that the rule was intended to be temporary.
>>>>>>
>>>>>> ##### Absolute Fee (Rule #3)
>>>>>>
>>>>>> The package must increase the absolute fee of the mempool, i.e. the
>>>>>> total fees
>>>>>> of the package must be higher than the absolute fees of the mempool
>>>>>> transactions
>>>>>> it replaces. Combined with the CPFP rule above, this differs from
>>>>>> BIP125 Rule #3
>>>>>> - an individual transaction in the package may have lower fees than
>>>>>> the
>>>>>>   transaction(s) it is replacing. In fact, it may have 0 fees, and
>>>>>> the child
>>>>>> pays for RBF.
>>>>>>
>>>>>> ##### Feerate (Rule #4)
>>>>>>
>>>>>> The package must pay for its own bandwidth; the package feerate must
>>>>>> be higher
>>>>>> than the replaced transactions by at least minimum relay feerate
>>>>>> (`incrementalRelayFee`). Combined with the CPFP rule above, this
>>>>>> differs from
>>>>>> BIP125 Rule #4 - an individual transaction in the package can have a
>>>>>> lower
>>>>>> feerate than the transaction(s) it is replacing. In fact, it may have
>>>>>> 0 fees,
>>>>>> and the child pays for RBF.
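>>>>>>
>>>>>> A sketch of how Rules #3 and #4 combine at the package level (illustrative
>>>>>> only, not the implementation):
>>>>>>
>>>>>>     def package_rbf_fee_rules(package_fee, package_vsize, replaced_fee,
>>>>>>                               incremental_relay_feerate):
>>>>>>         # Rule #3: the package must pay strictly more in absolute fees than
>>>>>>         # everything it evicts.
>>>>>>         if package_fee <= replaced_fee:
>>>>>>             return False
>>>>>>         # Rule #4: the additional fees must pay for the package's own bandwidth.
>>>>>>         return (package_fee - replaced_fee) / package_vsize >= incremental_relay_feerate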
>>>>>>
>>>>>> ##### Total Number of Replaced Transactions (Rule #5)
>>>>>>
>>>>>> The package cannot replace more than 100 mempool transactions. This
>>>>>> is identical
>>>>>> to BIP125 Rule #5.
>>>>>>
>>>>>> ### Expected FAQs
>>>>>>
>>>>>> 1. Is it possible for only some of the package to make it into the
>>>>>> mempool?
>>>>>>
>>>>>>    Yes, it is. However, since we evict transactions from the mempool
>>>>>> by
>>>>>> descendant score and the package child is supposed to be sponsoring
>>>>>> the fees of
>>>>>> its parents, the most common scenario would be all-or-nothing. This is
>>>>>> incentive-compatible. In fact, to be conservative, package validation
>>>>>> should
>>>>>> begin by trying to submit all of the transactions individually, and
>>>>>> only use the
>>>>>> package mempool acceptance logic if the parents fail due to low
>>>>>> feerate.
>>>>>>
>>>>>> 2. Should we allow packages to contain already-confirmed transactions?
>>>>>>
>>>>>>     No, for practical reasons. In mempool validation, we actually
>>>>>> aren't able to
>>>>>> tell with 100% confidence if we are looking at a transaction that has
>>>>>> already
>>>>>> confirmed, because we look up inputs using a UTXO set. If we have
>>>>>> historical
>>>>>> block data, it's possible to look for it, but this is inefficient,
>>>>>> not always
>>>>>> possible for pruning nodes, and unnecessary because we're not going
>>>>>> to do
>>>>>> anything with the transaction anyway. As such, we already have the
>>>>>> expectation
>>>>>> that transaction relay is somewhat "stateful" i.e. nobody should be
>>>>>> relaying
>>>>>> transactions that have already been confirmed. Similarly, we
>>>>>> shouldn't be
>>>>>> relaying packages that contain already-confirmed transactions.
>>>>>>
>>>>>> [1]: https://github.com/bitcoin/bitcoin/pull/22290
>>>>>> [2]:
>>>>>> https://github.com/bitcoin/bips/blob/1f0b563738199ca60d32b4ba779797fc97d040fe/bip-0141.mediawiki#transaction-size-calculations
>>>>>> [3]:
>>>>>> https://github.com/bitcoin/bitcoin/blob/94f83534e4b771944af7d9ed0f40746f392eb75e/src/policy/policy.cpp#L282
>>>>>> [4]: https://github.com/bitcoin/bitcoin/pull/16400
>>>>>> [5]: https://github.com/bitcoin/bitcoin/pull/21062
>>>>>> [6]: https://github.com/bitcoin/bitcoin/pull/22675
>>>>>> [7]: https://github.com/bitcoin/bitcoin/pull/22796
>>>>>> [8]: https://github.com/bitcoin/bitcoin/pull/20833
>>>>>> [9]: https://github.com/bitcoin/bitcoin/pull/21800
>>>>>> [10]: https://github.com/bitcoin/bitcoin/pull/16401
>>>>>> [11]: https://github.com/bitcoin/bitcoin/pull/19621
>>>>>> [12]: https://github.com/bitcoin/bips/blob/master/bip-0125.mediawiki
>>>>>> [13]:
>>>>>> https://github.com/bitcoin/bitcoin/pull/6871/files#diff-34d21af3c614ea3cee120df276c9c4ae95053830d7f1d3deaf009a4625409ad2R1101-R1104
>>>>>> [14]:
>>>>>> https://user-images.githubusercontent.com/25183001/133567078-075a971c-0619-4339-9168-b41fd2b90c28.png
>>>>>> [15]:
>>>>>> https://user-images.githubusercontent.com/25183001/132856734-fc17da75-f875-44bb-b954-cb7a1725cc0d.png
>>>>>> [16]:
>>>>>> https://user-images.githubusercontent.com/25183001/133567347-a3e2e4a8-ae9c-49f8-abb9-81e8e0aba224.png
>>>>>> [17]:
>>>>>> https://user-images.githubusercontent.com/25183001/133567370-21566d0e-36c8-4831-b1a8-706634540af3.png
>>>>>> [18]:
>>>>>> https://user-images.githubusercontent.com/25183001/133567444-bfff1142-439f-4547-800a-2ba2b0242bcb.png
>>>>>> [19]:
>>>>>> https://user-images.githubusercontent.com/25183001/133456219-0bb447cb-dcb4-4a31-b9c1-7d86205b68bc.png
>>>>>> [20]:
>>>>>> https://user-images.githubusercontent.com/25183001/132857787-7b7c6f56-af96-44c8-8d78-983719888c19.png
>>>>>> _______________________________________________
>>>>>> bitcoin-dev mailing list
>>>>>> bitcoin-dev@lists•linuxfoundation.org
>>>>>> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>>>>>>
>>>>> _______________________________________________
> bitcoin-dev mailing list
> bitcoin-dev@lists•linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>

[-- Attachment #2: Type: text/html, Size: 69835 bytes --]

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [bitcoin-dev] Proposal: Package Mempool Accept and Package RBF
  2021-09-27  7:15           ` Bastien TEINTURIER
@ 2021-09-28 22:59             ` Antoine Riard
  2021-09-29 11:56               ` Gloria Zhao
  0 siblings, 1 reply; 16+ messages in thread
From: Antoine Riard @ 2021-09-28 22:59 UTC (permalink / raw)
  To: Bastien TEINTURIER; +Cc: Bitcoin Protocol Discussion

[-- Attachment #1: Type: text/plain, Size: 70469 bytes --]

Hi Bastien

> In the case of LN, an attacker can game this and heavily restrict
> your RBF attempts if you're only allowed to use confirmed inputs
> and have many channels (and a limited number of confirmed inputs).
> Otherwise you'll need node operators to pre-emptively split their
> utxos into many small utxos just for fee bumping, which is inefficient...

I share the concern about splitting utxos into smaller ones.
IIRC, the carve-out tolerance is only 2txn/10_000 vb. If one of your
counterparties attaches a junk branch to her own anchor output, are you
allowed to chain your self-owned unconfirmed CPFP?
I'm thinking about the "Chained CPFPs" topology described here:
https://github.com/rust-bitcoin/rust-lightning/issues/989.
Or if you have another L2 broadcast topology which could be safe w.r.t our
current mempool logic :) ?


On Mon, Sep 27, 2021 at 3:15 AM Bastien TEINTURIER <bastien@acinq•fr>
wrote:

>> I think we could restrain package acceptance to only confirmed inputs for
>> now and revisit later this point ? For LN-anchor, you can assume that the
>> fee-bumping UTXO feeding the CPFP is already
>> confirmed. Or are there currently-deployed use-cases which would benefit
>> from your proposed Rule #2 ?
>>
>
> I think constraining package acceptance to only confirmed inputs
> is very limiting and quite dangerous for L2 protocols.
>
> In the case of LN, an attacker can game this and heavily restrict
> your RBF attempts if you're only allowed to use confirmed inputs
> and have many channels (and a limited number of confirmed inputs).
> Otherwise you'll need node operators to pre-emptively split their
> utxos into many small utxos just for fee bumping, which is inefficient...
>
> Bastien
>
> On Mon, Sep 27, 2021 at 12:27 AM Antoine Riard via bitcoin-dev <
> bitcoin-dev@lists•linuxfoundation.org> wrote:
>
>> Hi Gloria,
>>
>> Thanks for your answers,
>>
>> > In summary, it seems that the decisions that might still need
>> > attention/input from devs on this mailing list are:
>> > 1. Whether we should start with multiple-parent-1-child or
>> 1-parent-1-child.
>> > 2. Whether it's ok to require that the child not have conflicts with
>> > mempool transactions.
>>
>> Yes 1) it would be good to have inputs of more potential users of package
>> acceptance. And 2) I think it's more a matter of clearer wording of the
>> proposal.
>>
>> However, see my final point on the relaxation around "unconfirmed inputs"
>> which might in fact alter our current block construction strategy.
>>
>> > Right, the fact that we essentially always choose the first-seen
>> witness is
>> > an unfortunate limitation that exists already. Adding package mempool
>> > accept doesn't worsen this, but the procedure in the future is to
>> replace
>> > the witness when it makes sense economically. We can also add logic to
>> > allow package feerate to pay for witness replacements as well. This is
>> > pretty far into the future, though.
>>
>> Yes I agree package mempool doesn't worsen this. And it's not an issue
>> for current LN as you can't significantly inflate a spending witness for
>> the 2-of-2 funding output.
>> However, it might be an issue for multi-party protocol where the spending
>> script has alternative branches with asymmetric valid witness weights.
>> Taproot should ease that kind of script so hopefully we would deploy
>> wtxid-replacement not too far in the future.
>>
>> > I could be misunderstanding, but an attacker wouldn't be able to
>> > batch-attack like this. Alice's package only conflicts with A' + D',
>> not A'
>> > + B' + C' + D'. She only needs to pay for evicting 2 transactions.
>>
>> Yeah I can be clearer, I think you have 2 pinning attack scenarios to
>> consider.
>>
>> In LN, if you're trying to confirm a commitment transaction to time-out
>> or claim on-chain a HTLC and the timelock is near-expiration, you should be
>> ready to pay in commitment+2nd-stage HTLC transaction fees as much as the
>> value offered by the HTLC.
>>
>> Following this security assumption, an attacker can exploit it by
>> targeting commitment transactions from different channels together, blocking
>> them under a high-fee child whose fee value
>> is equal to the top-value HTLC + 1. Victims' fee-bumping logic won't
>> overbid, as it's not worthwhile to offer fees beyond their contested HTLCs. Apart
>> from observing mempool state, victims can't learn they're targeted by the
>> same attacker.
>>
>> To draw from the aforementioned topology, Mallory broadcasts A' + B' + C'
>> + D', where A' conflicts with Alice's P1, B' conflicts with Bob's P2, C'
>> conflicts with Caroll's P3. Let's assume P1 is confirming the top-value
>> HTLC of the set. If D' pays a fee higher than P1 + 1, it won't be rational for
>> Alice or Bob or Caroll to keep offering competing feerates. Mallory will be
>> at a loss on stealing P1, as she has paid more in fees but will realize a
>> gain on P2+P3.
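>>
>> Purely illustrative numbers to make the gain/loss comparison concrete (not
>> from any real channel):
>>
>>     htlc_value  = {"P1": 100_000, "P2": 60_000, "P3": 60_000}  # sats secured by each package
>>     d_prime_fee = htlc_value["P1"] + 1     # outbids the most any rational victim would pay
>>
>>     attacker_cost = d_prime_fee                            # fees paid when A'+B'+C'+D' confirms
>>     attacker_gain = htlc_value["P2"] + htlc_value["P3"]    # HTLCs won from victims who stop bidding
>>     net_result    = attacker_gain - attacker_cost          # 19_999 > 0: the batched attack pays off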
>>
>> In this model, Alice is allowed to evict those 2 transactions (A' + D')
>> but as she is economically-bounded she won't succeed.
>>
>> Mallory is maliciously exploiting RBF rule 3 on absolute fee. I think
>> this 1st pinning scenario is correct and "lucractive" when you sum the
>> global gain/loss.
>>
>> There is a 2nd attack scenario where A + B + C + D, where D is the child
>> of A,B,C. All those transactions are honestly issued by Alice. Once A + B +
>> C + D are propagated in network mempools, Mallory is able to replace A + D
>> with A' + D', where D' is paying a higher fee. This package A' + D' will
>> confirm soon if D's feerate was compelling, but Mallory succeeds in delaying
>> the confirmation
>> of B + C for one or more blocks. As B + C are pre-signed commitments with
>> a low-fee rate they won't confirm without Alice issuing a new child E.
>> Mallory can repeat the same trick by broadcasting
>> B' + E' and delay again the confirmation of C.
>>
>> If the pending HTLCs in the remaining packages are worth more than all the
>> fees Mallory over-bid, Mallory should realize a gain. With this 2nd
>> pinning attack, the malicious entity buys confirmation delay of your
>> packaged-together commitments.
>>
>> Assuming those attacks are correct, I'm leaning towards being
>> conservative with the LDK broadcast backend. Though once again, other L2
>> devs have likely other use-cases and opinions :)
>>
>> >  B' only needs to pay for itself in this case.
>>
>> Yes I think it's a nice discount when the UTXO is single-owned. In the
>> context of a shared-owned UTXO (e.g. LN), you might not get that discount if there is an
>> in-mempool package already spending the UTXO, and you have to assume the
>> worst-case scenario, i.e. have B' committing enough fee to pay for A'
>> replacement bandwidth. I think we can't do that much for this case...
>>
>> > If a package meets feerate requirements as a
>> package, the parents in the transaction are allowed to replace-by-fee
>> mempool transactions. The child cannot replace mempool transactions."
>>
>> I agree with the Mallory-vs-Alice case. Though what if Alice broadcasts A+B'
>> to replace A+B because the first broadcast isn't satisfying anymore due to
>> mempool spikes? Assuming B' pays enough fees, I see that case as child B'
>> replacing in-mempool transaction B, which I understand as going against "The
>> child cannot replace mempool transactions".
>>
>> Maybe wording could be a bit clearer ?
>>
>> > While it would be nice to have full RBF, malleability of the child won't
>> > block RBF here. If we're trying to replace A', we only require that A'
>> > signals replaceability, and don't mind if its child doesn't.
>>
>> Yes, it sounds good.
>>
>> > Yes, A+C+D pays 2500sat more in fees, but it is also 1000vB larger. A
>> miner
>> > should prefer to utilize their block space more effectively.
>>
>> If your mempool is empty and only composed of A+C+D or A+B, I think
>> taking A+C+D is the most efficient block construction you can come up with
>> as a miner ?
>>
>> > No, because we don't use that model.
>>
>> Can you describe what miner model we are using ? Like the block
>> construction strategy implemented by `addPackageTxs`, or one also encompassing
>> our current mempool acceptance policy, which I think relies on absolute fee
>> over ancestor score in case of replacement ?
>>
>> I think this point is worth discussing, as otherwise we might downgrade
>> the efficiency of our current block construction strategy in periods of
>> near-empty mempools. That knowledge could be discreetly leveraged by a
>> miner to gain an advantage over the rest of the mining ecosystem.
>>
>> Note, I think we *might* have to go in this direction if we want to
>> replace replace-by-fee by replace-by-feerate or replace-by-ancestor and
>> solve in-depth pinning attacks. Though if we do so,
>> IMO we would need more thoughts.
>>
>> I think we could restrain package acceptance to only confirmed inputs for
>> now and revisit later this point ? For LN-anchor, you can assume that the
>> fee-bumping UTXO feeding the CPFP is already
>> confirmed. Or are there currently-deployed use-cases which would benefit
>> from your proposed Rule #2 ?
>>
>> Antoine
>>
>> On Thu, Sep 23, 2021 at 11:36 AM Gloria Zhao <gloriajzhao@gmail•com>
>> wrote:
>>
>>> Hi Antoine,
>>>
>>> Thanks as always for your input. I'm glad we agree on so much!
>>>
>>> In summary, it seems that the decisions that might still need
>>> attention/input from devs on this mailing list are:
>>> 1. Whether we should start with multiple-parent-1-child or
>>> 1-parent-1-child.
>>> 2. Whether it's ok to require that the child not have conflicts with
>>> mempool transactions.
>>>
>>> Responding to your comments...
>>>
>>> > IIUC, you have package A+B, during the dedup phase early in
>>> `AcceptMultipleTransactions` if you observe same-txid-different-wtxid A'
>>> and A' is higher feerate than A, you trim A and replace by A' ?
>>>
>>> > I think this approach is safe, the one who appears unsafe to me is
>>> when A' has a _lower_ feerate, even if A' is already accepted by our
>>> mempool ? In that case iirc that would be a pinning.
>>>
>>> Right, the fact that we essentially always choose the first-seen witness
>>> is an unfortunate limitation that exists already. Adding package mempool
>>> accept doesn't worsen this, but the procedure in the future is to replace
>>> the witness when it makes sense economically. We can also add logic to
>>> allow package feerate to pay for witness replacements as well. This is
>>> pretty far into the future, though.
>>>
>>> > It sounds uneconomical for an attacker but I think it's not when you
>>> consider that you can "batch" attack against multiple honest
>>> counterparties. E.g, Mallory broadcast A' + B' + C' + D' where A' conflicts
>>> with Alice's honest package P1, B' conflicts with Bob's honest package P2,
>>> C' conflicts with Caroll's honest package P3. And D' is a high-fee child of
>>> A' + B' + C'.
>>>
>>> > If D' is higher-fee than P1 or P2 or P3 but inferior to the sum of
>>> HTLCs confirmed by P1+P2+P3, I think it's lucrative for the attacker ?
>>>
>>> I could be misunderstanding, but an attacker wouldn't be able to
>>> batch-attack like this. Alice's package only conflicts with A' + D', not A'
>>> + B' + C' + D'. She only needs to pay for evicting 2 transactions.
>>>
>>> > Do we assume that broadcasted packages are "honest" by default and
>>> that the parent(s) always need the child to pass the fee checks, that way
>>> saving the processing of individual transactions which are expected to fail
>>> in 99% of cases, or more ad hoc composition of packages at relay ?
>>> > I think this point is quite dependent on the p2p packages format/logic
>>> we'll end up on and that we should feel free to revisit it later ?
>>>
>>> I think it's the opposite; there's no way for us to assume that p2p
>>> packages will be "honest." I'd like to have two things before we expose on
>>> P2P: (1) ensure that the amount of resources potentially allocated for
>>> package validation isn't disproportionately higher than that of single
>>> transaction validation and (2) only use package validation when we're
>>> unsatisfied with the single validation result, e.g. we might get better
>>> fees.
>>> Yes, let's revisit this later :)
>>>
>>>  > Yes, if you receive A+B, and A is already in-mempool, I agree you can
>>> discard its feerate as B should pay for all fees checked on its own. Where
>>> I'm unclear is when you have in-mempool A+B and receive A+B'. Should B'
>>> have a fee high enough to cover the bandwidth penalty replacement
>>> (`PaysForRBF`, 2nd check) of both A+B' or only B' ?
>>>
>>>  B' only needs to pay for itself in this case.
>>>
>>> > > Do we want the child to be able to replace mempool transactions as
>>> well?
>>>
>>> > If we mean when you have replaceable A+B then A'+B' try to replace
>>> with a higher-feerate ? I think that's exactly the case we need for
>>> Lightning as A+B is coming from Alice and A'+B' is coming from Bob :/
>>>
>>> Let me clarify this because I can see that my wording was ambiguous, and
>>> then please let me know if it fits Lightning's needs?
>>>
>>> In my proposal, I wrote "If a package meets feerate requirements as a
>>> package, the parents in the transaction are allowed to replace-by-fee
>>> mempool transactions. The child cannot replace mempool transactions." What
>>> I meant was: the package can replace mempool transactions if any of the
>>> parents conflict with mempool transactions. The child cannot conflict
>>> with any mempool transactions.
>>> The Lightning use case this attempts to address is: Alice and Mallory
>>> are LN counterparties, and have packages A+B and A'+B', respectively. A and
>>> A' are their commitment transactions and conflict with each other; they
>>> have shared inputs and different txids.
>>> B spends Alice's anchor output from A. B' spends Mallory's anchor output
>>> from A'. Thus, B and B' do not conflict with each other.
>>> Alice can broadcast her package, A+B, to replace Mallory's package,
>>> A'+B', since B doesn't conflict with the mempool.
>>>
>>> Would this be ok?
>>>
>>> > The second option, a child of A'. In the LN case I think the CPFP is
>>> attached on one's anchor output.
>>>
>>> While it would be nice to have full RBF, malleability of the child won't
>>> block RBF here. If we're trying to replace A', we only require that A'
>>> signals replaceability, and don't mind if its child doesn't.
>>>
>>> > > B has an ancestor score of 10sat/vb and D has an
>>> > > ancestor score of ~2.9sat/vb. Since D's ancestor score is lower than
>>> B's,
>>> > > it fails the proposed package RBF Rule #2, so this package would be
>>> > > rejected. Does this meet your expectations?
>>>
>>> > Well what sounds odd to me, in my example, we fail D even if it has a
>>> higher-fee than B. Like A+B absolute fees are 2000 sats and A+C+D absolute
>>> fees are 3500 sats ?
>>>
>>> Yes, A+C+D pays 1500sat more in fees, but it is also 1000vB larger. A
>>> miner should prefer to utilize their block space more effectively.
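>>>
>>> As a quick sanity check with the numbers from the example:
>>>
>>>     # {A, B}:    2000 sat across  200 vB -> 10 sat per vB of block space.
>>>     # {A, C, D}: 3500 sat across 1200 vB -> ~2.9 sat per vB of block space.
>>>     ab_efficiency  = (1000 + 1000) / (100 + 100)
>>>     acd_efficiency = (1000 + 1000 + 1500) / (100 + 1000 + 100)
>>>     prefer_ab      = ab_efficiency > acd_efficiency   # True: A+B uses block space better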
>>>
>>> > Is this compatible with a model where a miner prioritizes absolute
>>> fees over ancestor score, in the case that mempools aren't full-enough to
>>> fulfill a block ?
>>>
>>> No, because we don't use that model.
>>>
>>> Thanks,
>>> Gloria
>>>
>>> On Thu, Sep 23, 2021 at 5:29 AM Antoine Riard <antoine.riard@gmail•com>
>>> wrote:
>>>
>>>> > Correct, if B+C is too low feerate to be accepted, we will reject it.
>>>> I
>>>> > prefer this because it is incentive compatible: A can be mined by
>>>> itself,
>>>> > so there's no reason to prefer A+B+C instead of A.
>>>> > As another way of looking at this, consider the case where we do
>>>> accept
>>>> > A+B+C and it sits at the "bottom" of our mempool. If our mempool
>>>> reaches
>>>> > capacity, we evict the lowest descendant feerate transactions, which
>>>> are
>>>> > B+C in this case. This gives us the same resulting mempool, with A
>>>> and not
>>>> > B+C.
>>>>
>>>> I agree here. Doing otherwise, we might evict other mempool transactions
>>>> in `MempoolAccept::Finalize` with a higher feerate than B+C while
>>>> those evicted transactions are the most compelling for block construction.
>>>>
>>>> I thought at first missing this acceptance requirement would break a
>>>> fee-bumping scheme like Parent-Pay-For-Child where a high-fee parent is
>>>> attached to a child signed with SIGHASH_ANYONECANPAY but in this case the
>>>> child fee is capturing the parent value. I can't think of other fee-bumping
>>>> schemes potentially affected. If they do exist I would say they're wrong in
>>>> their design assumptions.
>>>>
>>>> > If or when we have witness replacement, the logic is: if the
>>>> individual
>>>> > transaction is enough to replace the mempool one, the replacement will
>>>> > happen during the preceding individual transaction acceptance, and
>>>> > deduplication logic will work. Otherwise, we will try to deduplicate
>>>> by
>>>> > wtxid, see that we need a package witness replacement, and use the
>>>> package
>>>> > feerate to evaluate whether this is economically rational.
>>>>
>>>> IIUC, you have package A+B, during the dedup phase early in
>>>> `AcceptMultipleTransactions` if you observe same-txid-different-wtxid A'
>>>> and A' is higher feerate than A, you trim A and replace by A' ?
>>>>
>>>> I think this approach is safe, the one who appears unsafe to me is when
>>>> A' has a _lower_ feerate, even if A' is already accepted by our mempool ?
>>>> In that case iirc that would be a pinning.
>>>>
>>>> Good to see progress on witness replacement before we see usage of
>>>> Taproot tree in the context of multi-party, where a malicious counterparty
>>>> inflates its witness to jam an honest spending.
>>>>
>>>> (Note, the commit linked currently points nowhere :))
>>>>
>>>>
>>>> > Please note that A may replace A' even if A' has higher fees than A
>>>> > individually, because the proposed package RBF utilizes the fees and
>>>> size
>>>> > of the entire package. This just requires E to pay enough fees,
>>>> although
>>>> > this can be pretty high if there are also potential B' and C'
>>>> competing
>>>> > commitment transactions that we don't know about.
>>>>
>>>> Ah right, if the package acceptance waives `PaysMoreThanConflicts` for
>>>> the individual check on A, the honest package should replace the pinning
>>>> attempt. I've not fully parsed the proposed implementation yet.
>>>>
>>>> Though note, I think it's still unsafe for a Lightning
>>>> multi-commitment-broadcast-as-one-package as a malicious A' might have an
>>>> absolute fee higher than E. It sounds uneconomical for
>>>> an attacker but I think it's not when you consider that you can "batch"
>>>> attack against multiple honest counterparties. E.g, Mallory broadcast A' +
>>>> B' + C' + D' where A' conflicts with Alice's honest package P1, B'
>>>> conflicts with Bob's honest package P2, C' conflicts with Caroll's honest
>>>> package P3. And D' is a high-fee child of A' + B' + C'.
>>>>
>>>> If D' is higher-fee than P1 or P2 or P3 but inferior to the sum of
>>>> HTLCs confirmed by P1+P2+P3, I think it's lucrative for the attacker ?
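>>>>
>>>> Rough back-of-the-envelope numbers to illustrate the batching intuition (a
>>>> Python sketch with invented values, not measurements):
>>>>
>>>>     htlc_value_at_stake = 1_000_000   # sats time-sensitive in each of P1, P2, P3
>>>>     honest_package_fee  = 50_000      # fee each honest package pays
>>>>     pinning_package_fee = 60_000      # total fee of A' + B' + C' + D'
>>>>     victims = 3
>>>>
>>>>     # The attacker only needs to outbid each honest package once...
>>>>     print(pinning_package_fee > honest_package_fee)              # True
>>>>     # ...while the value it puts at risk is summed over all victims:
>>>>     print(pinning_package_fee < victims * htlc_value_at_stake)   # True -> can be lucrative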
>>>>
>>>> > So far, my understanding is that multi-parent-1-child is desired for
>>>> > batched fee-bumping (
>>>> > https://github.com/bitcoin/bitcoin/pull/22674#issuecomment-897951289)
>>>> and
>>>> > I've also seen your response which I have less context on (
>>>> > https://github.com/bitcoin/bitcoin/pull/22674#issuecomment-900352202).
>>>> That
>>>> > being said, I am happy to create a new proposal for 1 parent + 1 child
>>>> > (which would be slightly simpler) and plan for moving to
>>>> > multi-parent-1-child later if that is preferred. I am very interested
>>>> in
>>>> > hearing feedback on that approach.
>>>>
>>>> I think batched fee-bumping is okay as long as you don't have
>>>> time-sensitive outputs encumbering your commitment transactions. Otherwise,
>>>> for the reasons mentioned above, I think that's unsafe.
>>>>
>>>> What I'm worried about is L2 developers, potentially not aware of
>>>> all the mempool subtleties, blurring the difference and always batching
>>>> their broadcasts by default.
>>>>
>>>> IMO, a good thing about restraining to 1-parent + 1-child is that we
>>>> artificially constrain the L2 design space for now and minimize the risks
>>>> of unsafe usage of the package API :)
>>>>
>>>> I think that's a point where it would be relevant to have the opinion
>>>> of more L2 devs.
>>>>
>>>> > I think there is a misunderstanding here - let me describe what I'm
>>>> > proposing we'd do in this situation: we'll try individual submission
>>>> for A,
>>>> > see that it fails due to "insufficient fees." Then, we'll try package
>>>> > validation for A+B and use package RBF. If A+B pays enough, it can
>>>> still
>>>> > replace A'. If A fails for a bad signature, we won't look at B or
>>>> A+B. Does
>>>> > this meet your expectations?
>>>>
>>>> Yes, there was a misunderstanding; I think this approach is correct,
>>>> it's more a question of performance. Do we assume that broadcast packages
>>>> are "honest" by default and that the parent(s) always need the child to
>>>> pass the fee checks, thereby saving the processing of individual
>>>> transactions which are expected to fail in 99% of cases, or do we expect
>>>> more ad hoc composition of packages at relay ?
>>>>
>>>> I think this point is quite dependent on the p2p package format/logic
>>>> we'll end up with, and that we should feel free to revisit it later ?
>>>>
>>>>
>>>> > What problem are you trying to solve by the package feerate *after*
>>>> dedup
>>>> rule ?
>>>> > My understanding is that an in-package transaction might be already in
>>>> the mempool. Therefore, to compute a correct RBF penalty replacement,
>>>> the
>>>> vsize of this transaction could be discarded lowering the cost of
>>>> package
>>>> RBF.
>>>>
>>>> > I'm proposing that, when a transaction has already been submitted to
>>>> > mempool, we would ignore both its fees and vsize when calculating
>>>> package
>>>> > feerate.
>>>>
>>>> Yes, if you receive A+B, and A is already in-mempool, I agree you can
>>>> discard its feerate as B should pay for all fees checked on its own. Where
>>>> I'm unclear is when you have in-mempool A+B and receive A+B'. Should B'
>>>> have a fee high enough to cover the replacement bandwidth penalty
>>>> (`PaysForRBF`, 2nd check) for both A+B', or only B' ?
>>>>
>>>> If you have a second-layer like current Lightning, you might have a
>>>> counterparty commitment to replace and should always expect to have to pay
>>>> for parent replacement bandwidth.
>>>>
>>>> Where a potential discount sounds interesting is when you have an
>>>> unambiguous state on the first stage of transactions. E.g., a DLC's funding
>>>> transaction, which might be CPFPed by any participant iirc.
>>>>
>>>> > Note that, if C' conflicts with C, it also conflicts with D, since D
>>>> is a
>>>> > descendant of C and would thus need to be evicted along with it.
>>>>
>>>> Ah, once again I think it's a misunderstanding on my side without the code
>>>> under my eyes! If we do C' `PreChecks`, resolve the conflicts provoked by
>>>> it, i.e. mark D for potential eviction and don't consider it for future
>>>> conflicts in the rest of the package, I think D' `PreChecks` should be good ?
>>>>
>>>> > More generally, this example is surprising to me because I didn't
>>>> think
>>>> > packages would be used to fee-bump replaceable transactions. Do we
>>>> want the
>>>> > child to be able to replace mempool transactions as well?
>>>>
>>>> If we mean the case where you have replaceable A+B, and then A'+B' tries to
>>>> replace it with a higher feerate ? I think that's exactly the case we need
>>>> for Lightning, as A+B is coming from Alice and A'+B' is coming from Bob :/
>>>>
>>>> > I'm not sure what you mean? Let's say we have a package of parent A +
>>>> child
>>>> > B, where A is supposed to replace a mempool transaction A'. Are you
>>>> saying
>>>> > that counterparties are able to malleate the package child B, or a
>>>> child of
>>>> > A'?
>>>>
>>>> The second option, a child of A'. In the LN case I think the CPFP is
>>>> attached to one's anchor output.
>>>>
>>>> I think it's good if we assume the
>>>> solve-conflicts-after-parent's-`PreChecks` approach mentioned above, or
>>>> fixing inherited signaling, or full-RBF ?
>>>>
>>>> > Sorry, I don't understand what you mean by "preserve the package
>>>> > integrity?" Could you elaborate?
>>>>
>>>> After thinking about it, the relaxation about the "new" unconfirmed input
>>>> is not linked to trimming but, I would say, more to the multi-parent
>>>> support.
>>>>
>>>> Let's say you have A+B trying to replace C+D, where B is also spending the
>>>> already-in-mempool E. To succeed, you need to waive the
>>>> no-new-unconfirmed-input rule, as D isn't spending E.
>>>>
>>>> So good, I think we agree on the problem description here.
>>>>
>>>> > I am in agreement with your calculations but unsure if we disagree on
>>>> the
>>>> > expected outcome. Yes, B has an ancestor score of 10sat/vb and D has
>>>> an
>>>> > ancestor score of ~2.9sat/vb. Since D's ancestor score is lower than
>>>> B's,
>>>> > it fails the proposed package RBF Rule #2, so this package would be
>>>> > rejected. Does this meet your expectations?
>>>>
>>>> Well, what sounds odd to me is that, in my example, we fail D even if it
>>>> has a higher fee than B. Like, A+B absolute fees are 2000 sats and A+C+D
>>>> absolute fees are 3500 sats ?
>>>>
>>>> Is this compatible with a model where a miner prioritizes absolute fees
>>>> over ancestor score, in the case that mempools aren't full enough to fill
>>>> a block ?
>>>>
>>>> Let me know if I can clarify a point.
>>>>
>>>> Antoine
>>>>
>>>> On Mon, Sep 20, 2021 at 11:10 AM Gloria Zhao <gloriajzhao@gmail•com>
>>>> wrote:
>>>>
>>>>>
>>>>> Hi Antoine,
>>>>>
>>>>> First of all, thank you for the thorough review. I appreciate your
>>>>> insight on LN requirements.
>>>>>
>>>>> > IIUC, you have a package A+B+C submitted for acceptance and A is
>>>>> already in your mempool. You trim out A from the package and then evaluate
>>>>> B+C.
>>>>>
>>>>> > I think this might be an issue if A is the higher-fee element of the
>>>>> ABC package. B+C package fees might be under the mempool min fee and will
>>>>> be rejected, potentially breaking the acceptance expectations of the
>>>>> package issuer ?
>>>>>
>>>>> Correct, if B+C is too low feerate to be accepted, we will reject it.
>>>>> I prefer this because it is incentive compatible: A can be mined by itself,
>>>>> so there's no reason to prefer A+B+C instead of A.
>>>>> As another way of looking at this, consider the case where we do
>>>>> accept A+B+C and it sits at the "bottom" of our mempool. If our mempool
>>>>> reaches capacity, we evict the lowest descendant feerate transactions,
>>>>> which are B+C in this case. This gives us the same resulting mempool, with
>>>>> A and not B+C.
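>>>>>
>>>>> To illustrate with a toy model (fees in sats, sizes in vB; this is not the
>>>>> mempool's actual data structures or exact eviction criteria):
>>>>>
>>>>>     mempool = {
>>>>>         "A": {"fee": 3000, "vsize": 100, "descendants": []},
>>>>>         "B": {"fee": 100,  "vsize": 100, "descendants": ["C"]},
>>>>>         "C": {"fee": 200,  "vsize": 100, "descendants": []},
>>>>>     }
>>>>>
>>>>>     def descendant_score(txid):
>>>>>         txs = [txid] + mempool[txid]["descendants"]
>>>>>         fee = sum(mempool[t]["fee"] for t in txs)
>>>>>         vsize = sum(mempool[t]["vsize"] for t in txs)
>>>>>         return fee / vsize
>>>>>
>>>>>     # Evicting from the low end of this ordering drops B (1.5 sat/vB) along
>>>>>     # with its descendant C, and keeps A -- the same mempool we'd get by
>>>>>     # rejecting B+C at acceptance time.
>>>>>     print(sorted(mempool, key=descendant_score))   # ['B', 'C', 'A']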
>>>>>
>>>>>
>>>>> > Further, I think the dedup should be done on wtxid, as you might
>>>>> have multiple valid witnesses. Though with varying vsizes and as such
>>>>> offering different feerates.
>>>>>
>>>>> I agree that variations of the same package with different witnesses
>>>>> are a case that must be handled. I consider witness replacement to be a
>>>>> project that can be done in parallel to package mempool acceptance because
>>>>> being able to accept packages does not worsen the problem of a
>>>>> same-txid-different-witness "pinning" attack.
>>>>>
>>>>> If or when we have witness replacement, the logic is: if the
>>>>> individual transaction is enough to replace the mempool one, the
>>>>> replacement will happen during the preceding individual transaction
>>>>> acceptance, and deduplication logic will work. Otherwise, we will try to
>>>>> deduplicate by wtxid, see that we need a package witness replacement, and
>>>>> use the package feerate to evaluate whether this is economically rational.
>>>>>
>>>>> See the #22290 "handle package transactions already in mempool" commit
>>>>> (
>>>>> https://github.com/bitcoin/bitcoin/pull/22290/commits/fea75a2237b46cf76145242fecad7e274bfcb5ff),
>>>>> which handles the case of same-txid-different-witness by simply using the
>>>>> transaction in the mempool for now, with TODOs for what I just described.
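>>>>>
>>>>> As a rough sketch of that dedup decision (hypothetical shape, not the PR's
>>>>> actual interface):
>>>>>
>>>>>     from dataclasses import dataclass
>>>>>
>>>>>     @dataclass
>>>>>     class Tx:
>>>>>         txid: str
>>>>>         wtxid: str
>>>>>
>>>>>     def dedup(package, mempool_txids, mempool_wtxids):
>>>>>         """Return the package members that still need full validation."""
>>>>>         to_validate = []
>>>>>         for tx in package:
>>>>>             if tx.wtxid in mempool_wtxids:
>>>>>                 continue   # identical transaction already in mempool: skip it
>>>>>             if tx.txid in mempool_txids:
>>>>>                 # same txid, different witness: for now keep the mempool version;
>>>>>                 # a future witness-replacement step would compare the two here
>>>>>                 continue
>>>>>             to_validate.append(tx)
>>>>>         return to_validate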
>>>>>
>>>>>
>>>>> > I'm not clearly understanding the accepted topologies. By "parent
>>>>> and child to share a parent", do you mean the set of transactions A, B, C,
>>>>> where B is spending A and C is spending A and B would be correct ?
>>>>>
>>>>> Yes, that is what I meant. Yes, that would be a valid package under these
>>>>> rules.
>>>>>
>>>>> > If yes, is there a width-limit introduced or we fallback on
>>>>> MAX_PACKAGE_COUNT=25 ?
>>>>>
>>>>> No, there is no limit on connectivity other than "child with all
>>>>> unconfirmed parents." We will enforce MAX_PACKAGE_COUNT=25 and child's
>>>>> in-mempool + in-package ancestor limits.
>>>>>
>>>>>
>>>>> > Considering the current Core's mempool acceptance rules, I think
>>>>> CPFP batching is unsafe for LN time-sensitive closure. A malicious tx-relay
>>>>> jamming successful on one channel commitment transaction would contaminate
>>>>> the remaining commitments sharing the same package.
>>>>>
>>>>> > E.g., you broadcast the package A+B+C+D+E where A,B,C,D are
>>>>> commitment transactions and E a shared CPFP. If a malicious A' transaction
>>>>> has a better feerate than A, the whole package acceptance will fail. Even
>>>>> if A' confirms in the following block,
>>>>> the propagation and confirmation of B+C+D have been delayed. This
>>>>> could result in a loss of funds.
>>>>>
>>>>> Please note that A may replace A' even if A' has higher fees than A
>>>>> individually, because the proposed package RBF utilizes the fees and size
>>>>> of the entire package. This just requires E to pay enough fees, although
>>>>> this can be pretty high if there are also potential B' and C' competing
>>>>> commitment transactions that we don't know about.
>>>>>
>>>>>
>>>>> > IMHO, I'm leaning towards deploying during a first phase
>>>>> 1-parent/1-child. I think it's the most conservative step still improving
>>>>> second-layer safety.
>>>>>
>>>>> So far, my understanding is that multi-parent-1-child is desired for
>>>>> batched fee-bumping (
>>>>> https://github.com/bitcoin/bitcoin/pull/22674#issuecomment-897951289)
>>>>> and I've also seen your response which I have less context on (
>>>>> https://github.com/bitcoin/bitcoin/pull/22674#issuecomment-900352202).
>>>>> That being said, I am happy to create a new proposal for 1 parent + 1 child
>>>>> (which would be slightly simpler) and plan for moving to
>>>>> multi-parent-1-child later if that is preferred. I am very interested in
>>>>> hearing feedback on that approach.
>>>>>
>>>>>
>>>>> > If A+B is submitted to replace A', where A pays 0 sats, B pays 200
>>>>> sats and A' pays 100 sats, and we apply the individual RBF on A, A+B
>>>>> acceptance fails. For this reason I think the individual RBF should be
>>>>> bypassed and only the package RBF should apply ?
>>>>>
>>>>> I think there is a misunderstanding here - let me describe what I'm
>>>>> proposing we'd do in this situation: we'll try individual submission for A,
>>>>> see that it fails due to "insufficient fees." Then, we'll try package
>>>>> validation for A+B and use package RBF. If A+B pays enough, it can still
>>>>> replace A'. If A fails for a bad signature, we won't look at B or A+B. Does
>>>>> this meet your expectations?
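>>>>>
>>>>> In rough pseudocode (hypothetical helper names, just to pin down the
>>>>> control flow I mean):
>>>>>
>>>>>     def accept_package(package, try_tx, try_package):
>>>>>         accepted = []
>>>>>         fee_failure = False
>>>>>         for tx in package:
>>>>>             ok, reason = try_tx(tx)              # individual submission first
>>>>>             if ok:
>>>>>                 accepted.append(tx)
>>>>>             elif reason == "insufficient fee":
>>>>>                 fee_failure = True               # only fee failures fall through
>>>>>             else:
>>>>>                 return "package rejected"        # e.g. bad signature: stop here
>>>>>         if fee_failure:
>>>>>             return try_package(package)          # package feerate + package RBF
>>>>>         return accepted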
>>>>>
>>>>>
>>>>> > What problem are you trying to solve by the package feerate *after*
>>>>> dedup rule ?
>>>>> > My understanding is that an in-package transaction might be already
>>>>> in the mempool. Therefore, to compute a correct RBF penalty replacement,
>>>>> the vsize of this transaction could be discarded lowering the cost of
>>>>> package RBF.
>>>>>
>>>>> I'm proposing that, when a transaction has already been submitted to
>>>>> mempool, we would ignore both its fees and vsize when calculating package
>>>>> feerate. In example G2, we shouldn't count M1 fees after its submission to
>>>>> mempool, since M1's fees have already been used to pay for its individual
>>>>> bandwidth, and it shouldn't be used again to pay for P2 and P3's bandwidth.
>>>>> We also shouldn't count its vsize, since it has already been paid for.
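>>>>>
>>>>> For example (invented fee numbers, only meant to show which terms are
>>>>> excluded; all sizes 100vB):
>>>>>
>>>>>     package = [
>>>>>         {"name": "P1", "fee": 400, "vsize": 100, "in_mempool": True},   # submitted earlier
>>>>>         {"name": "P2", "fee": 0,   "vsize": 100, "in_mempool": False},
>>>>>         {"name": "P3", "fee": 300, "vsize": 100, "in_mempool": False},
>>>>>     ]
>>>>>
>>>>>     fresh = [t for t in package if not t["in_mempool"]]
>>>>>     feerate = sum(t["fee"] for t in fresh) / sum(t["vsize"] for t in fresh)
>>>>>     print(feerate)   # 1.5 sat/vB: P1's fee and vsize aren't counted a second time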
>>>>>
>>>>>
>>>>> > I think this is a footgunish API: if a package issuer sends the
>>>>> multiple-parent-one-child package A,B,C,D where D is the child of A,B,C,
>>>>> and then tries to broadcast the higher-feerate C'+D' package, it should be
>>>>> rejected. So it's breaking the naive broadcaster assumption that a
>>>>> higher-feerate/higher-fee package always replaces ?
>>>>>
>>>>> Note that, if C' conflicts with C, it also conflicts with D, since D
>>>>> is a descendant of C and would thus need to be evicted along with it.
>>>>> Implicitly, D' would not be in conflict with D.
>>>>> More generally, this example is surprising to me because I didn't
>>>>> think packages would be used to fee-bump replaceable transactions. Do we
>>>>> want the child to be able to replace mempool transactions as well? This can
>>>>> be implemented with a bit of additional logic.
>>>>>
>>>>> > I think this is unsafe for L2s if counterparties have malleability
>>>>> of the child transaction. They can block your package replacement by
>>>>> opting-out from RBF signaling. IIRC, LN's "anchor output" presents such an
>>>>> ability.
>>>>>
>>>>> I'm not sure what you mean? Let's say we have a package of parent A +
>>>>> child B, where A is supposed to replace a mempool transaction A'. Are you
>>>>> saying that counterparties are able to malleate the package child B, or a
>>>>> child of A'? If they can malleate a child of A', that shouldn't matter as
>>>>> long as A' is signaling replacement. This would be handled identically with
>>>>> full RBF and what Core currently implements.
>>>>>
>>>>> > I think this is an issue brought by the trimming during the dedup
>>>>> phase. If we preserve the package integrity, only re-using the tx-level
>>>>> check results of already in-mempool transactions to gain in CPU time, we
>>>>> won't have this issue. Package children can add unconfirmed inputs as long
>>>>> as they're in-package; the bip125 rule2 is only evaluated against parents ?
>>>>>
>>>>> Sorry, I don't understand what you mean by "preserve the package
>>>>> integrity?" Could you elaborate?
>>>>>
>>>>> > Let's say you have in-mempool A, B where A pays 10 sat/vb for 100
>>>>> vbytes and B pays 10 sat/vb for 100 vbytes. You have the candidate
>>>>> replacement D spending both A and C where D pays 15sat/vb for 100 vbytes
>>>>> and C pays 1 sat/vb for 1000 vbytes.
>>>>>
>>>>> > Package A + B ancestor score is 10 sat/vb.
>>>>>
>>>>> > D has a higher feerate/absolute fee than B.
>>>>>
>>>>> > Package A + C + D ancestor score is ~ 3 sat/vb ((A's 1000 sats + C's
>>>>> 1000 sats + D's 1500 sats) / A's 100 vb + C's 1000 vb + D's 100 vb)
>>>>>
>>>>> I am in agreement with your calculations but unsure if we disagree on
>>>>> the expected outcome. Yes, B has an ancestor score of 10sat/vb and D has an
>>>>> ancestor score of ~2.9sat/vb. Since D's ancestor score is lower than B's,
>>>>> it fails the proposed package RBF Rule #2, so this package would be
>>>>> rejected. Does this meet your expectations?
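>>>>>
>>>>> Writing out that arithmetic (fees in sats, sizes in vB, same numbers as
>>>>> your example):
>>>>>
>>>>>     A = {"fee": 1000, "vsize": 100}    # in mempool
>>>>>     B = {"fee": 1000, "vsize": 100}    # in mempool, spends A, to be replaced
>>>>>     C = {"fee": 1000, "vsize": 1000}   # new parent
>>>>>     D = {"fee": 1500, "vsize": 100}    # new child: spends A and C, conflicts with B
>>>>>
>>>>>     def ancestor_score(*txs):
>>>>>         return sum(t["fee"] for t in txs) / sum(t["vsize"] for t in txs)
>>>>>
>>>>>     print(ancestor_score(A, B))      # 10.0 sat/vB
>>>>>     print(ancestor_score(A, C, D))   # ~2.92 sat/vB < 10, so Rule #2 rejects it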
>>>>>
>>>>> Thank you for linking to projects that might be interested in package
>>>>> relay :)
>>>>>
>>>>> Thanks,
>>>>> Gloria
>>>>>
>>>>> On Mon, Sep 20, 2021 at 12:16 AM Antoine Riard <
>>>>> antoine.riard@gmail•com> wrote:
>>>>>
>>>>>> Hi Gloria,
>>>>>>
>>>>>> > A package may contain transactions that are already in the mempool.
>>>>>> We
>>>>>> > remove
>>>>>> > ("deduplicate") those transactions from the package for the
>>>>>> purposes of
>>>>>> > package
>>>>>> > mempool acceptance. If a package is empty after deduplication, we do
>>>>>> > nothing.
>>>>>>
>>>>>> IIUC, you have a package A+B+C submitted for acceptance and A is
>>>>>> already in your mempool. You trim out A from the package and then evaluate
>>>>>> B+C.
>>>>>>
>>>>>> I think this might be an issue if A is the higher-fee element of the
>>>>>> ABC package. B+C package fees might be under the mempool min fee and will
>>>>>> be rejected, potentially breaking the acceptance expectations of the
>>>>>> package issuer ?
>>>>>>
>>>>>> Further, I think the dedup should be done on wtxid, as you might have
>>>>>> multiple valid witnesses. Though with varying vsizes and as such offering
>>>>>> different feerates.
>>>>>>
>>>>>> E.g you're going to evaluate the package A+B and A' is already in
>>>>>> your mempool with a bigger valid witness. You trim A based on txid, then
>>>>>> you evaluate A'+B, which fails the fee checks. However, evaluating A+B
>>>>>> would have been a success.
>>>>>>
>>>>>> AFAICT, the dedup rationale would be to save on CPU time/disk IO, to
>>>>>> avoid repeated signature verification and parent UTXO fetches ? Can we
>>>>>> achieve the same goal by bypassing tx-level checks for already-in-mempool
>>>>>> transactions while preserving the package integrity for package-level
>>>>>> checks ?
>>>>>>
>>>>>> > Note that it's possible for the parents to be
>>>>>> > indirect
>>>>>> > descendants/ancestors of one another, or for parent and child to
>>>>>> share a
>>>>>> > parent,
>>>>>> > so we cannot make any other topology assumptions.
>>>>>>
>>>>>> I'm not clearly understanding the accepted topologies. By "parent and
>>>>>> child to share a parent", do you mean the set of transactions A, B, C,
>>>>>> where B is spending A and C is spending A and B would be correct ?
>>>>>>
>>>>>> If yes, is there a width-limit introduced, or do we fall back on
>>>>>> MAX_PACKAGE_COUNT=25 ?
>>>>>>
>>>>>> IIRC, one rationale for coming up with this topology limitation was to
>>>>>> lower the DoS risks when potentially deploying p2p packages.
>>>>>>
>>>>>> Considering the current Core's mempool acceptance rules, I think CPFP
>>>>>> batching is unsafe for LN time-sensitive closure. A malicious tx-relay
>>>>>> jamming successful on one channel commitment transaction would contaminate
>>>>>> the remaining commitments sharing the same package.
>>>>>>
>>>>>> E.g., you broadcast the package A+B+C+D+E where A,B,C,D are commitment
>>>>>> transactions and E a shared CPFP. If a malicious A' transaction has a
>>>>>> better feerate than A, the whole package acceptance will fail. Even if A'
>>>>>> confirms in the following block,
>>>>>> the propagation and confirmation of B+C+D have been delayed. This
>>>>>> could result in a loss of funds.
>>>>>>
>>>>>> That said, if you're broadcasting commitment transactions without
>>>>>> time-sensitive HTLC outputs, I think the batching is effectively a fee
>>>>>> saving as you don't have to duplicate the CPFP.
>>>>>>
>>>>>> IMHO, I'm leaning towards deploying during a first phase
>>>>>> 1-parent/1-child. I think it's the most conservative step still improving
>>>>>> second-layer safety.
>>>>>>
>>>>>> > *Rationale*:  It would be incorrect to use the fees of transactions
>>>>>> that are
>>>>>> > already in the mempool, as we do not want a transaction's fees to be
>>>>>> > double-counted for both its individual RBF and package RBF.
>>>>>>
>>>>>> I'm unsure about the logical order of the checks proposed.
>>>>>>
>>>>>> If A+B is submitted to replace A', where A pays 0 sats, B pays 200
>>>>>> sats and A' pays 100 sats, and we apply the individual RBF on A, A+B
>>>>>> acceptance fails. For this reason I think the individual RBF should be
>>>>>> bypassed and only the package RBF should apply ?
>>>>>>
>>>>>> Note this situation is plausible: with the current LN design, your
>>>>>> counterparty can have a commitment transaction with a better fee just by
>>>>>> selecting a higher `dust_limit_satoshis` than yours.
>>>>>>
>>>>>> > Examples F and G [14] show the same package, but P1 is submitted
>>>>>> > individually before
>>>>>> > the package in example G. In example F, we can see that the 300vB
>>>>>> package
>>>>>> > pays
>>>>>> > an additional 200sat in fees, which is not enough to pay for its own
>>>>>> > bandwidth
>>>>>> > (BIP125#4). In example G, we can see that P1 pays enough to replace
>>>>>> M1, but
>>>>>> > using P1's fees again during package submission would make it look
>>>>>> like a
>>>>>> > 300sat
>>>>>> > increase for a 200vB package. Even including its fees and size
>>>>>> would not be
>>>>>> > sufficient in this example, since the 300sat looks like enough for
>>>>>> the 300vB
>>>>>> package. The calculation after deduplication is 100sat increase
>>>>>> for a
>>>>>> > package
>>>>>> > of size 200vB, which correctly fails BIP125#4. Assume all
>>>>>> transactions have
>>>>>> > a
>>>>>> > size of 100vB.
>>>>>>
>>>>>> What problem are you trying to solve by the package feerate *after*
>>>>>> dedup rule ?
>>>>>>
>>>>>> My understanding is that an in-package transaction might be already
>>>>>> in the mempool. Therefore, to compute a correct RBF penalty replacement,
>>>>>> the vsize of this transaction could be discarded lowering the cost of
>>>>>> package RBF.
>>>>>>
>>>>>> If we keep a "safe" dedup mechanism (see my point above), I think
>>>>>> this discount is justified, as the validation cost of node operators is
>>>>>> paid for ?
>>>>>>
>>>>>> > The child cannot replace mempool transactions.
>>>>>>
>>>>>> Let's say you issue package A+B, then package C+B', where B' is a
>>>>>> child of both A and C. This rule fails the acceptance of C+B' ?
>>>>>>
>>>>>> I think this is a footgunish API: if a package issuer sends the
>>>>>> multiple-parent-one-child package A,B,C,D where D is the child of A,B,C,
>>>>>> and then tries to broadcast the higher-feerate C'+D' package, it should be
>>>>>> rejected. So it's breaking the naive broadcaster assumption that a
>>>>>> higher-feerate/higher-fee package always replaces ? And it might be unsafe
>>>>>> in protocols where states are symmetric. E.g., a malicious counterparty
>>>>>> broadcasts first S+A, then you honestly broadcast S+B, where B pays better
>>>>>> fees.
>>>>>>
>>>>>> > All mempool transactions to be replaced must signal replaceability.
>>>>>>
>>>>>> I think this is unsafe for L2s if counterparties have malleability of
>>>>>> the child transaction. They can block your package replacement by
>>>>>> opting-out from RBF signaling. IIRC, LN's "anchor output" presents such an
>>>>>> ability.
>>>>>>
>>>>>> I think it's better to either fix inherited signaling or move towards
>>>>>> full-rbf.
>>>>>>
>>>>>> > if a package parent has already been submitted, it would
>>>>>> > look
>>>>>> >like the child is spending a "new" unconfirmed input.
>>>>>>
>>>>>> I think this is an issue brought by the trimming during the dedup
>>>>>> phase. If we preserve the package integrity, only re-using the tx-level
>>>>>> check results of already in-mempool transactions to gain in CPU time, we
>>>>>> won't have this issue. Package children can add unconfirmed inputs as long
>>>>>> as they're in-package; the bip125 rule2 is only evaluated against parents ?
>>>>>>
>>>>>> > However, we still achieve the same goal of requiring the
>>>>>> > replacement
>>>>>> > transactions to have an ancestor score at least as high as the
>>>>>> original
>>>>>> > ones.
>>>>>>
>>>>>> I'm not sure if this holds...
>>>>>>
>>>>>> Let's say you have in-mempool A, B where A pays 10 sat/vb for 100
>>>>>> vbytes and B pays 10 sat/vb for 100 vbytes. You have the candidate
>>>>>> replacement D spending both A and C where D pays 15sat/vb for 100 vbytes
>>>>>> and C pays 1 sat/vb for 1000 vbytes.
>>>>>>
>>>>>> Package A + B ancestor score is 10 sat/vb.
>>>>>>
>>>>>> D has a higher feerate/absolute fee than B.
>>>>>>
>>>>>> Package A + C + D ancestor score is ~ 3 sat/vb ((A's 1000 sats + C's
>>>>>> 1000 sats + D's 1500 sats) /
>>>>>> A's 100 vb + C's 1000 vb + D's 100 vb)
>>>>>>
>>>>>> Overall, this is a review through the lens of LN requirements. I
>>>>>> think other L2 protocols/applications
>>>>>> could be candidates for using package accept/relay, such as:
>>>>>> * https://github.com/lightninglabs/pool
>>>>>> * https://github.com/discreetlogcontracts/dlcspecs
>>>>>> * https://github.com/bitcoin-teleport/teleport-transactions/
>>>>>> * https://github.com/sapio-lang/sapio
>>>>>> *
>>>>>> https://github.com/commerceblock/mercury/blob/master/doc/statechains.md
>>>>>> * https://github.com/revault/practical-revault
>>>>>>
>>>>>> Thanks for rolling forward the ball on this subject.
>>>>>>
>>>>>> Antoine
>>>>>>
>>>>>> On Thu, Sep 16, 2021 at 3:55 AM Gloria Zhao via bitcoin-dev <
>>>>>> bitcoin-dev@lists•linuxfoundation.org> wrote:
>>>>>>
>>>>>>> Hi there,
>>>>>>>
>>>>>>> I'm writing to propose a set of mempool policy changes to enable
>>>>>>> package
>>>>>>> validation (in preparation for package relay) in Bitcoin Core. These
>>>>>>> would not
>>>>>>> be consensus or P2P protocol changes. However, since mempool policy
>>>>>>> significantly affects transaction propagation, I believe this is
>>>>>>> relevant for
>>>>>>> the mailing list.
>>>>>>>
>>>>>>> My proposal enables packages consisting of multiple parents and 1
>>>>>>> child. If you
>>>>>>> develop software that relies on specific transaction relay
>>>>>>> assumptions and/or
>>>>>>> are interested in using package relay in the future, I'm very
>>>>>>> interested to hear
>>>>>>> your feedback on the utility or restrictiveness of these package
>>>>>>> policies for
>>>>>>> your use cases.
>>>>>>>
>>>>>>> A draft implementation of this proposal can be found in [Bitcoin Core
>>>>>>> PR#22290][1].
>>>>>>>
>>>>>>> An illustrated version of this post can be found at
>>>>>>> https://gist.github.com/glozow/dc4e9d5c5b14ade7cdfac40f43adb18a.
>>>>>>> I have also linked the images below.
>>>>>>>
>>>>>>> ## Background
>>>>>>>
>>>>>>> Feel free to skip this section if you are already familiar with
>>>>>>> mempool policy
>>>>>>> and package relay terminology.
>>>>>>>
>>>>>>> ### Terminology Clarifications
>>>>>>>
>>>>>>> * Package = an ordered list of related transactions, representable
>>>>>>> by a Directed
>>>>>>>   Acyclic Graph.
>>>>>>> * Package Feerate = the total modified fees divided by the total
>>>>>>> virtual size of
>>>>>>>   all transactions in the package.
>>>>>>>     - Modified fees = a transaction's base fees + fee delta applied
>>>>>>> by the user
>>>>>>>       with `prioritisetransaction`. As such, we expect this to vary
>>>>>>> across
>>>>>>> mempools.
>>>>>>>     - Virtual Size = the maximum of virtual sizes calculated using
>>>>>>> [BIP141
>>>>>>>       virtual size][2] and sigop weight. [Implemented here in
>>>>>>> Bitcoin Core][3].
>>>>>>>     - Note that feerate is not necessarily based on the base fees
>>>>>>> and serialized
>>>>>>>       size.
>>>>>>>
>>>>>>> * Fee-Bumping = user/wallet actions that take advantage of miner
>>>>>>> incentives to
>>>>>>>   boost a transaction's candidacy for inclusion in a block,
>>>>>>> including Child Pays
>>>>>>> for Parent (CPFP) and [BIP125][12] Replace-by-Fee (RBF). Our
>>>>>>> intention in
>>>>>>> mempool policy is to recognize when the new transaction is more
>>>>>>> economical to
>>>>>>> mine than the original one(s) but not open DoS vectors, so there are
>>>>>>> some
>>>>>>> limitations.
>>>>>>>
>>>>>>> ### Policy
>>>>>>>
>>>>>>> The purpose of the mempool is to store the best (to be most
>>>>>>> incentive-compatible
>>>>>>> with miners, highest feerate) candidates for inclusion in a block.
>>>>>>> Miners use
>>>>>>> the mempool to build block templates. The mempool is also useful as
>>>>>>> a cache for
>>>>>>> boosting block relay and validation performance, aiding transaction
>>>>>>> relay, and
>>>>>>> generating feerate estimations.
>>>>>>>
>>>>>>> Ideally, all consensus-valid transactions paying reasonable fees
>>>>>>> should make it
>>>>>>> to miners through normal transaction relay, without any special
>>>>>>> connectivity or
>>>>>>> relationships with miners. On the other hand, nodes do not have
>>>>>>> unlimited
>>>>>>> resources, and a P2P network designed to let any honest node
>>>>>>> broadcast their
>>>>>>> transactions also exposes the transaction validation engine to DoS
>>>>>>> attacks from
>>>>>>> malicious peers.
>>>>>>>
>>>>>>> As such, for unconfirmed transactions we are considering for our
>>>>>>> mempool, we
>>>>>>> apply a set of validation rules in addition to consensus, primarily
>>>>>>> to protect
>>>>>>> us from resource exhaustion and aid our efforts to keep the highest
>>>>>>> fee
>>>>>>> transactions. We call this mempool _policy_: a set of (configurable,
>>>>>>> node-specific) rules that transactions must abide by in order to be
>>>>>>> accepted
>>>>>>> into our mempool. Transaction "Standardness" rules and mempool
>>>>>>> restrictions such
>>>>>>> as "too-long-mempool-chain" are both examples of policy.
>>>>>>>
>>>>>>> ### Package Relay and Package Mempool Accept
>>>>>>>
>>>>>>> In transaction relay, we currently consider transactions one at a
>>>>>>> time for
>>>>>>> submission to the mempool. This creates a limitation in the node's
>>>>>>> ability to
>>>>>>> determine which transactions have the highest feerates, since we
>>>>>>> cannot take
>>>>>>> into account descendants (i.e. cannot use CPFP) until all the
>>>>>>> transactions are
>>>>>>> in the mempool. Similarly, we cannot use a transaction's descendants
>>>>>>> when
>>>>>>> considering it for RBF. When an individual transaction does not meet
>>>>>>> the mempool
>>>>>>> minimum feerate and the user isn't able to create a replacement
>>>>>>> transaction
>>>>>>> directly, it will not be accepted by mempools.
>>>>>>>
>>>>>>> This limitation presents a security issue for applications and users
>>>>>>> relying on
>>>>>>> time-sensitive transactions. For example, Lightning and other
>>>>>>> protocols create
>>>>>>> UTXOs with multiple spending paths, where one counterparty's
>>>>>>> spending path opens
>>>>>>> up after a timelock, and users are protected from cheating scenarios
>>>>>>> as long as
>>>>>>> they redeem on-chain in time. A key security assumption is that all
>>>>>>> parties'
>>>>>>> transactions will propagate and confirm in a timely manner. This
>>>>>>> assumption can
>>>>>>> be broken if fee-bumping does not work as intended.
>>>>>>>
>>>>>>> The end goal for Package Relay is to consider multiple transactions
>>>>>>> at the same
>>>>>>> time, e.g. a transaction with its high-fee child. This may help us
>>>>>>> better
>>>>>>> determine whether transactions should be accepted to our mempool,
>>>>>>> especially if
>>>>>>> they don't meet fee requirements individually or are better RBF
>>>>>>> candidates as a
>>>>>>> package. A combination of changes to mempool validation logic,
>>>>>>> policy, and
>>>>>>> transaction relay allows us to better propagate the transactions
>>>>>>> with the
>>>>>>> highest package feerates to miners, and makes fee-bumping tools more
>>>>>>> powerful
>>>>>>> for users.
>>>>>>>
>>>>>>> The "relay" part of Package Relay suggests P2P messaging changes,
>>>>>>> but a large
>>>>>>> part of the changes are in the mempool's package validation logic.
>>>>>>> We call this
>>>>>>> *Package Mempool Accept*.
>>>>>>>
>>>>>>> ### Previous Work
>>>>>>>
>>>>>>> * Given that mempool validation is DoS-sensitive and complex, it
>>>>>>> would be
>>>>>>>   dangerous to haphazardly tack on package validation logic. Many
>>>>>>> efforts have
>>>>>>> been made to make mempool validation less opaque (see [#16400][4],
>>>>>>> [#21062][5],
>>>>>>> [#22675][6], [#22796][7]).
>>>>>>> * [#20833][8] Added basic capabilities for package validation, test
>>>>>>> accepts only
>>>>>>>   (no submission to mempool).
>>>>>>> * [#21800][9] Implemented package ancestor/descendant limit checks
>>>>>>> for arbitrary
>>>>>>>   packages. Still test accepts only.
>>>>>>> * Previous package relay proposals (see [#16401][10], [#19621][11]).
>>>>>>>
>>>>>>> ### Existing Package Rules
>>>>>>>
>>>>>>> These are in master as introduced in [#20833][8] and [#21800][9].
>>>>>>> I'll consider
>>>>>>> them as "given" in the rest of this document, though they can be
>>>>>>> changed, since
>>>>>>> package validation is test-accept only right now.
>>>>>>>
>>>>>>> 1. A package cannot exceed `MAX_PACKAGE_COUNT=25` count and
>>>>>>> `MAX_PACKAGE_SIZE=101KvB` total size [8]
>>>>>>>
>>>>>>>    *Rationale*: This is already enforced as mempool
>>>>>>> ancestor/descendant limits.
>>>>>>> Presumably, transactions in a package are all related, so exceeding
>>>>>>> this limit
>>>>>>> would mean that the package can either be split up or it wouldn't
>>>>>>> pass this
>>>>>>> mempool policy.
>>>>>>>
>>>>>>> 2. Packages must be topologically sorted: if any dependencies exist
>>>>>>> between
>>>>>>> transactions, parents must appear somewhere before children. [8]
>>>>>>>
>>>>>>> 3. A package cannot have conflicting transactions, i.e. none of them
>>>>>>> can spend
>>>>>>> the same inputs. This also means there cannot be duplicate
>>>>>>> transactions. [8]
>>>>>>>
>>>>>>> 4. When packages are evaluated against ancestor/descendant limits in
>>>>>>> a test
>>>>>>> accept, the union of all of their descendants and ancestors is
>>>>>>> considered. This
>>>>>>> is essentially a "worst case" heuristic where every transaction in
>>>>>>> the package
>>>>>>> is treated as each other's ancestor and descendant. [8]
>>>>>>> Packages for which ancestor/descendant limits are accurately
>>>>>>> captured by this
>>>>>>> heuristic: [19]
>>>>>>>
>>>>>>> There are also limitations such as the fact that CPFP carve out is
>>>>>>> not applied
>>>>>>> to package transactions. #20833 also disables RBF in package
>>>>>>> validation; this
>>>>>>> proposal overrides that to allow packages to use RBF.
>>>>>>>
>>>>>>> ## Proposed Changes
>>>>>>>
>>>>>>> The next step in the Package Mempool Accept project is to implement
>>>>>>> submission
>>>>>>> to mempool, initially through RPC only. This allows us to test the
>>>>>>> submission
>>>>>>> logic before exposing it on P2P.
>>>>>>>
>>>>>>> ### Summary
>>>>>>>
>>>>>>> - Packages may contain already-in-mempool transactions.
>>>>>>> - Packages are 2 generations, Multi-Parent-1-Child.
>>>>>>> - Fee-related checks use the package feerate. This means that
>>>>>>> wallets can
>>>>>>> create a package that utilizes CPFP.
>>>>>>> - Parents are allowed to RBF mempool transactions with a set of
>>>>>>> rules similar
>>>>>>>   to BIP125. This enables a combination of CPFP and RBF, where a
>>>>>>> transaction's descendant fees pay for replacing mempool conflicts.
>>>>>>>
>>>>>>> There is a draft implementation in [#22290][1]. It is WIP, but
>>>>>>> feedback is
>>>>>>> always welcome.
>>>>>>>
>>>>>>> ### Details
>>>>>>>
>>>>>>> #### Packages May Contain Already-in-Mempool Transactions
>>>>>>>
>>>>>>> A package may contain transactions that are already in the mempool.
>>>>>>> We remove
>>>>>>> ("deduplicate") those transactions from the package for the purposes
>>>>>>> of package
>>>>>>> mempool acceptance. If a package is empty after deduplication, we do
>>>>>>> nothing.
>>>>>>>
>>>>>>> *Rationale*: Mempools vary across the network. It's possible for a
>>>>>>> parent to be
>>>>>>> accepted to the mempool of a peer on its own due to differences in
>>>>>>> policy and
>>>>>>> fee market fluctuations. We should not reject or penalize the entire
>>>>>>> package for
>>>>>>> an individual transaction as that could be a censorship vector.
>>>>>>>
>>>>>>> #### Packages Are Multi-Parent-1-Child
>>>>>>>
>>>>>>> Only packages of a specific topology are permitted. Namely, a
>>>>>>> package is exactly
>>>>>>> 1 child with all of its unconfirmed parents. After deduplication,
>>>>>>> the package
>>>>>>> may be exactly the same, empty, 1 child, 1 child with just some of
>>>>>>> its
>>>>>>> unconfirmed parents, etc. Note that it's possible for the parents to
>>>>>>> be indirect
>>>>>>> descendants/ancestors of one another, or for parent and child to
>>>>>>> share a parent,
>>>>>>> so we cannot make any other topology assumptions.
>>>>>>>
>>>>>>> *Rationale*: This allows for fee-bumping by CPFP. Allowing multiple
>>>>>>> parents
>>>>>>> makes it possible to fee-bump a batch of transactions. Restricting
>>>>>>> packages to a
>>>>>>> defined topology is also easier to reason about and simplifies the
>>>>>>> validation
>>>>>>> logic greatly. Multi-parent-1-child allows us to think of the
>>>>>>> package as one big
>>>>>>> transaction, where:
>>>>>>>
>>>>>>> - Inputs = all the inputs of parents + inputs of the child that come
>>>>>>> from
>>>>>>>   confirmed UTXOs
>>>>>>> - Outputs = all the outputs of the child + all outputs of the
>>>>>>> parents that
>>>>>>>   aren't spent by other transactions in the package
>>>>>>>
>>>>>>> Examples of packages that follow this rule (variations of example A
>>>>>>> show some
>>>>>>> possibilities after deduplication): ![image][15]
>>>>>>>
>>>>>>> #### Fee-Related Checks Use Package Feerate
>>>>>>>
>>>>>>> Package Feerate = the total modified fees divided by the total
>>>>>>> virtual size of
>>>>>>> all transactions in the package.
>>>>>>>
>>>>>>> To meet the two feerate requirements of a mempool, i.e., the
>>>>>>> pre-configured
>>>>>>> minimum relay feerate (`minRelayTxFee`) and dynamic mempool minimum
>>>>>>> feerate, the
>>>>>>> total package feerate is used instead of the individual feerate. The
>>>>>>> individual
>>>>>>> transactions are allowed to be below feerate requirements if the
>>>>>>> package meets
>>>>>>> the feerate requirements. For example, the parent(s) in the package
>>>>>>> can have 0
>>>>>>> fees but be paid for by the child.
>>>>>>>
>>>>>>> *Rationale*: This can be thought of as "CPFP within a package,"
>>>>>>> solving the
>>>>>>> issue of a parent not meeting minimum fees on its own. This allows L2
>>>>>>> applications to adjust their fees at broadcast time instead of
>>>>>>> overshooting or
>>>>>>> risking getting stuck/pinned.
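>>>>>>>
>>>>>>> A minimal illustration of the check (the numbers, including the minimum
>>>>>>> feerate, are made up):
>>>>>>>
>>>>>>>     parent = {"fee": 0,    "vsize": 150}   # pays nothing on its own
>>>>>>>     child  = {"fee": 1500, "vsize": 150}   # pays for both
>>>>>>>
>>>>>>>     mempool_min_feerate = 3.0              # sat/vB, placeholder value
>>>>>>>     package_feerate = (parent["fee"] + child["fee"]) / (
>>>>>>>         parent["vsize"] + child["vsize"])
>>>>>>>     print(package_feerate >= mempool_min_feerate)   # True (5.0 sat/vB)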
>>>>>>>
>>>>>>> We use the package feerate of the package *after deduplication*.
>>>>>>>
>>>>>>> *Rationale*:  It would be incorrect to use the fees of transactions
>>>>>>> that are
>>>>>>> already in the mempool, as we do not want a transaction's fees to be
>>>>>>> double-counted for both its individual RBF and package RBF.
>>>>>>>
>>>>>>> Examples F and G [14] show the same package, but P1 is submitted
>>>>>>> individually before
>>>>>>> the package in example G. In example F, we can see that the 300vB
>>>>>>> package pays
>>>>>>> an additional 200sat in fees, which is not enough to pay for its own
>>>>>>> bandwidth
>>>>>>> (BIP125#4). In example G, we can see that P1 pays enough to replace
>>>>>>> M1, but
>>>>>>> using P1's fees again during package submission would make it look
>>>>>>> like a 300sat
>>>>>>> increase for a 200vB package. Even including its fees and size would
>>>>>>> not be
>>>>>>> sufficient in this example, since the 300sat looks like enough for
>>>>>>> the 300vB
>>>>>>> package. The calculation after deduplication is 100sat increase for
>>>>>>> a package
>>>>>>> of size 200vB, which correctly fails BIP125#4. Assume all
>>>>>>> transactions have a
>>>>>>> size of 100vB.
>>>>>>>
>>>>>>> #### Package RBF
>>>>>>>
>>>>>>> If a package meets feerate requirements as a package, the parents in
>>>>>>> the
>>>>>>> transaction are allowed to replace-by-fee mempool transactions. The
>>>>>>> child cannot
>>>>>>> replace mempool transactions. Multiple transactions can replace the
>>>>>>> same
>>>>>>> transaction, but in order to be valid, none of the transactions can
>>>>>>> try to
>>>>>>> replace an ancestor of another transaction in the same package
>>>>>>> (which would thus
>>>>>>> make its inputs unavailable).
>>>>>>>
>>>>>>> *Rationale*: Even if we are using package feerate, a package will
>>>>>>> not propagate
>>>>>>> as intended if RBF still requires each individual transaction to
>>>>>>> meet the
>>>>>>> feerate requirements.
>>>>>>>
>>>>>>> We use a set of rules slightly modified from BIP125 as follows:
>>>>>>>
>>>>>>> ##### Signaling (Rule #1)
>>>>>>>
>>>>>>> All mempool transactions to be replaced must signal replaceability.
>>>>>>>
>>>>>>> *Rationale*: Package RBF signaling logic should be the same for
>>>>>>> package RBF and
>>>>>>> single transaction acceptance. This would be updated if single
>>>>>>> transaction
>>>>>>> validation moves to full RBF.
>>>>>>>
>>>>>>> ##### New Unconfirmed Inputs (Rule #2)
>>>>>>>
>>>>>>> A package may include new unconfirmed inputs, but the ancestor
>>>>>>> feerate of the
>>>>>>> child must be at least as high as the ancestor feerates of every
>>>>>>> transaction
>>>>>>> being replaced. This is contrary to BIP125#2, which states "The
>>>>>>> replacement
>>>>>>> transaction may only include an unconfirmed input if that input was
>>>>>>> included in
>>>>>>> one of the original transactions. (An unconfirmed input spends an
>>>>>>> output from a
>>>>>>> currently-unconfirmed transaction.)"
>>>>>>>
>>>>>>> *Rationale*: The purpose of BIP125#2 is to ensure that the
>>>>>>> replacement
>>>>>>> transaction has a higher ancestor score than the original
>>>>>>> transaction(s) (see
>>>>>>> [comment][13]). Example H [16] shows how adding a new unconfirmed
>>>>>>> input can lower the
>>>>>>> ancestor score of the replacement transaction. P1 is trying to
>>>>>>> replace M1, and
>>>>>>> spends an unconfirmed output of M2. P1 pays 800sat, M1 pays 600sat,
>>>>>>> and M2 pays
>>>>>>> 100sat. Assume all transactions have a size of 100vB. While, in
>>>>>>> isolation, P1
>>>>>>> looks like a better mining candidate than M1, it must be mined with
>>>>>>> M2, so its
>>>>>>> ancestor feerate is actually 4.5sat/vB.  This is lower than M1's
>>>>>>> ancestor
>>>>>>> feerate, which is 6sat/vB.
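>>>>>>>
>>>>>>> Spelled out with the numbers above:
>>>>>>>
>>>>>>>     P1 = {"fee": 800, "vsize": 100}
>>>>>>>     M1 = {"fee": 600, "vsize": 100}
>>>>>>>     M2 = {"fee": 100, "vsize": 100}   # unconfirmed transaction P1 spends from
>>>>>>>
>>>>>>>     p1_ancestor = (P1["fee"] + M2["fee"]) / (P1["vsize"] + M2["vsize"])  # 4.5 sat/vB
>>>>>>>     m1_ancestor = M1["fee"] / M1["vsize"]                                # 6.0 sat/vB
>>>>>>>     print(p1_ancestor < m1_ancestor)   # True: the replacement is a downgrade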
>>>>>>>
>>>>>>> In package RBF, the rule analogous to BIP125#2 would be "none of the
>>>>>>> transactions in the package can spend new unconfirmed inputs."
>>>>>>> Example J [17] shows
>>>>>>> why, if any of the package transactions have ancestors, package
>>>>>>> feerate is no
>>>>>>> longer accurate. Even though M2 and M3 are not ancestors of P1
>>>>>>> (which is the
>>>>>>> replacement transaction in an RBF), we're actually interested in the
>>>>>>> entire
>>>>>>> package. A miner should mine M1 which is 5sat/vB instead of M2, M3,
>>>>>>> P1, P2, and
>>>>>>> P3, which is only 4sat/vB. The Package RBF rule cannot be loosened
>>>>>>> to only allow
>>>>>>> the child to have new unconfirmed inputs, either, because it can
>>>>>>> still cause us
>>>>>>> to overestimate the package's ancestor score.
>>>>>>>
>>>>>>> However, enforcing a rule analogous to BIP125#2 would not only make
>>>>>>> Package RBF
>>>>>>> less useful, but would also break Package RBF for packages with
>>>>>>> parents already
>>>>>>> in the mempool: if a package parent has already been submitted, it
>>>>>>> would look
>>>>>>> like the child is spending a "new" unconfirmed input. In example K
>>>>>>> [18], we're
>>>>>>> looking to replace M1 with the entire package including P1, P2, and
>>>>>>> P3. We must
>>>>>>> consider the case where one of the parents is already in the mempool
>>>>>>> (in this
>>>>>>> case, P2), which means we must allow P3 to have new unconfirmed
>>>>>>> inputs. However,
>>>>>>> M2 lowers the ancestor score of P3 to 4.3sat/vB, so we should not
>>>>>>> replace M1
>>>>>>> with this package.
>>>>>>>
>>>>>>> Thus, the package RBF rule regarding new unconfirmed inputs is less
>>>>>>> strict than
>>>>>>> BIP125#2. However, we still achieve the same goal of requiring the
>>>>>>> replacement
>>>>>>> transactions to have an ancestor score at least as high as the
>>>>>>> original ones. As
>>>>>>> a result, the entire package is required to be a higher feerate
>>>>>>> mining candidate
>>>>>>> than each of the replaced transactions.
>>>>>>>
>>>>>>> Another note: the [comment][13] above the BIP125#2 code in the
>>>>>>> original RBF
>>>>>>> implementation suggests that the rule was intended to be temporary.
>>>>>>>
>>>>>>> ##### Absolute Fee (Rule #3)
>>>>>>>
>>>>>>> The package must increase the absolute fee of the mempool, i.e. the
>>>>>>> total fees
>>>>>>> of the package must be higher than the absolute fees of the mempool
>>>>>>> transactions
>>>>>>> it replaces. Combined with the CPFP rule above, this differs from
>>>>>>> BIP125 Rule #3
>>>>>>> - an individual transaction in the package may have lower fees than
>>>>>>> the
>>>>>>>   transaction(s) it is replacing. In fact, it may have 0 fees, and
>>>>>>> the child
>>>>>>> pays for RBF.
>>>>>>>
>>>>>>> ##### Feerate (Rule #4)
>>>>>>>
>>>>>>> The package must pay for its own bandwidth; the package feerate must
>>>>>>> be higher
>>>>>>> than the replaced transactions by at least minimum relay feerate
>>>>>>> (`incrementalRelayFee`). Combined with the CPFP rule above, this
>>>>>>> differs from
>>>>>>> BIP125 Rule #4 - an individual transaction in the package can have a
>>>>>>> lower
>>>>>>> feerate than the transaction(s) it is replacing. In fact, it may
>>>>>>> have 0 fees,
>>>>>>> and the child pays for RBF.
>>>>>>>
>>>>>>> ##### Total Number of Replaced Transactions (Rule #5)
>>>>>>>
>>>>>>> The package cannot replace more than 100 mempool transactions. This
>>>>>>> is identical
>>>>>>> to BIP125 Rule #5.
>>>>>>>
>>>>>>> ### Expected FAQs
>>>>>>>
>>>>>>> 1. Is it possible for only some of the package to make it into the
>>>>>>> mempool?
>>>>>>>
>>>>>>>    Yes, it is. However, since we evict transactions from the mempool
>>>>>>> by
>>>>>>> descendant score and the package child is supposed to be sponsoring
>>>>>>> the fees of
>>>>>>> its parents, the most common scenario would be all-or-nothing. This
>>>>>>> is
>>>>>>> incentive-compatible. In fact, to be conservative, package
>>>>>>> validation should
>>>>>>> begin by trying to submit all of the transactions individually, and
>>>>>>> only use the
>>>>>>> package mempool acceptance logic if the parents fail due to low
>>>>>>> feerate.
>>>>>>>
>>>>>>> 2. Should we allow packages to contain already-confirmed
>>>>>>> transactions?
>>>>>>>
>>>>>>>     No, for practical reasons. In mempool validation, we actually
>>>>>>> aren't able to
>>>>>>> tell with 100% confidence if we are looking at a transaction that
>>>>>>> has already
>>>>>>> confirmed, because we look up inputs using a UTXO set. If we have
>>>>>>> historical
>>>>>>> block data, it's possible to look for it, but this is inefficient,
>>>>>>> not always
>>>>>>> possible for pruning nodes, and unnecessary because we're not going
>>>>>>> to do
>>>>>>> anything with the transaction anyway. As such, we already have the
>>>>>>> expectation
>>>>>>> that transaction relay is somewhat "stateful" i.e. nobody should be
>>>>>>> relaying
>>>>>>> transactions that have already been confirmed. Similarly, we
>>>>>>> shouldn't be
>>>>>>> relaying packages that contain already-confirmed transactions.
>>>>>>>
>>>>>>> [1]: https://github.com/bitcoin/bitcoin/pull/22290
>>>>>>> [2]:
>>>>>>> https://github.com/bitcoin/bips/blob/1f0b563738199ca60d32b4ba779797fc97d040fe/bip-0141.mediawiki#transaction-size-calculations
>>>>>>> [3]:
>>>>>>> https://github.com/bitcoin/bitcoin/blob/94f83534e4b771944af7d9ed0f40746f392eb75e/src/policy/policy.cpp#L282
>>>>>>> [4]: https://github.com/bitcoin/bitcoin/pull/16400
>>>>>>> [5]: https://github.com/bitcoin/bitcoin/pull/21062
>>>>>>> [6]: https://github.com/bitcoin/bitcoin/pull/22675
>>>>>>> [7]: https://github.com/bitcoin/bitcoin/pull/22796
>>>>>>> [8]: https://github.com/bitcoin/bitcoin/pull/20833
>>>>>>> [9]: https://github.com/bitcoin/bitcoin/pull/21800
>>>>>>> [10]: https://github.com/bitcoin/bitcoin/pull/16401
>>>>>>> [11]: https://github.com/bitcoin/bitcoin/pull/19621
>>>>>>> [12]: https://github.com/bitcoin/bips/blob/master/bip-0125.mediawiki
>>>>>>> [13]:
>>>>>>> https://github.com/bitcoin/bitcoin/pull/6871/files#diff-34d21af3c614ea3cee120df276c9c4ae95053830d7f1d3deaf009a4625409ad2R1101-R1104
>>>>>>> [14]:
>>>>>>> https://user-images.githubusercontent.com/25183001/133567078-075a971c-0619-4339-9168-b41fd2b90c28.png
>>>>>>> [15]:
>>>>>>> https://user-images.githubusercontent.com/25183001/132856734-fc17da75-f875-44bb-b954-cb7a1725cc0d.png
>>>>>>> [16]:
>>>>>>> https://user-images.githubusercontent.com/25183001/133567347-a3e2e4a8-ae9c-49f8-abb9-81e8e0aba224.png
>>>>>>> [17]:
>>>>>>> https://user-images.githubusercontent.com/25183001/133567370-21566d0e-36c8-4831-b1a8-706634540af3.png
>>>>>>> [18]:
>>>>>>> https://user-images.githubusercontent.com/25183001/133567444-bfff1142-439f-4547-800a-2ba2b0242bcb.png
>>>>>>> [19]:
>>>>>>> https://user-images.githubusercontent.com/25183001/133456219-0bb447cb-dcb4-4a31-b9c1-7d86205b68bc.png
>>>>>>> [20]:
>>>>>>> https://user-images.githubusercontent.com/25183001/132857787-7b7c6f56-af96-44c8-8d78-983719888c19.png
>>>>>>> _______________________________________________
>>>>>>> bitcoin-dev mailing list
>>>>>>> bitcoin-dev@lists•linuxfoundation.org
>>>>>>> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>>>>>>>
>>>>>> _______________________________________________
>> bitcoin-dev mailing list
>> bitcoin-dev@lists•linuxfoundation.org
>> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>>
>

[-- Attachment #2: Type: text/html, Size: 71253 bytes --]

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [bitcoin-dev] Proposal: Package Mempool Accept and Package RBF
  2021-09-28 22:59             ` Antoine Riard
@ 2021-09-29 11:56               ` Gloria Zhao
  0 siblings, 0 replies; 16+ messages in thread
From: Gloria Zhao @ 2021-09-29 11:56 UTC (permalink / raw)
  To: Antoine Riard; +Cc: Bitcoin Protocol Discussion

[-- Attachment #1: Type: text/plain, Size: 78335 bytes --]

Hi Antoine and Bastien,

> Yes 1) it would be good to have inputs of more potential users of package
acceptance. And 2) I think it's more a matter of clearer wording of the
proposal.

(1) I'm leaning towards multi-parent-1-child and offering [#22674][0] up
for review. If somebody feels very strongly about 1-parent-1-child, please
let me know.

(2) I'm glad this turned out to just be a wording problem. I've updated the
proposal to [say][1] "If it meets feerate requirements, the package can
replace mempool transactions if any of the parents conflict with mempool
transactions. The child cannot conflict with any mempool transactions."
Hopefully that is more *univoque*.

Side note: I've also updated the proposal to contain a [section][2] on why
submitting transactions individually before package validation is
incentive-compatible. I think it's relevant to our conversation, but for
those who just want to _use_ packages, it's just an implementation detail.

On restricting packages to confirmed inputs only:

> I think we could restrain package acceptance to only confirmed inputs for
now and revisit later this point ? For LN-anchor, you can assume that the
fee-bumping UTXO feeding the CPFP is already
confirmed. Or are there currently-deployed use-cases which would benefit
from your proposed Rule #2 ?

I thought about this a lot this week, and wrote up a summary of why I don't
think BIP125#2 helps us at all [here][3] on #23121. I see that you've
already come across it :)

> IIRC, the carve-out tolerance is only 2txn/10_000 vb. If one of your
counterparties attaches a junk branch on her own anchor output, are you
allowed to chain your self-owned unconfirmed CPFP ?

Yes, if your counterparty attaches a bunch of descendants to their anchor
output to dominate the descendant limit of your shared commitment
transaction, CPFP carve out allows you to add 1 extra transaction under
10KvB to your own anchor output. It's fine if it spends an unconfirmed
input, as long as you aren't exceeding the descendant limits of that
transaction. This shouldn't be the case; I think something is seriously
wrong if all of your UTXOs are tied up in mempool transactions with big
ancestor/descendant trees.

I don't know much about L2 development so I'm just going to quote this:

> I think constraining package acceptance to only confirmed inputs is very
limiting and quite dangerous for L2 protocols.

Since the restriction isn't helpful in simplifying the mempool code, makes
things more complicated for application developers, and can be dangerous
for L2, I'd prefer not to add this restriction for packages.

On Antoine's question about our miner model:

> Can you describe what miner model we are using ? Like the block
construction strategy implemented by `addPackagesTxs` or also encompassing
our current mempool acceptance policy, which I think rely on absolute fee
over ancestor score in case of replacement ?

Our current model for block construction is this: we sort our mempool by
package ancestor score (total modified fees of a tx and its unconfirmed
ancestors / total vsize as seen by our mempool) and add packages to a block
until it's full. That's not to say this is the perfect miner policy, but
mempool acceptance logic follows this model as closely as possible because
it is, fundamentally, a cache that aids in block assembly performance. As
another way of looking at this, imagine if our mempool was so small it
could only store ~1 block's worth of transactions. It should always try to
keep the highest-fees-within-1-block transactions, and obviously wouldn't
evict small-but-valuable transactions in favor of giant ones paying mediocre
feerates. All fee-related mempool policies, including RBF, consider
feerate. BIP125#3 is a rule on absolute fees, but it is always combined
with BIP125#4, a rule on feerates. AFAIK, the reason it doesn't use
ancestor score is that information wasn't cached in mempool entries at the
time, and thus not readily available to use in mempool validation.
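
As a toy version of that model (nothing like Core's actual block assembly
code, just the ordering idea):

    def build_block_template(entries, max_block_vsize=1_000_000):
        # Each entry: (name, fee including unconfirmed ancestors, vsize including
        # unconfirmed ancestors). The real code updates these "packages" as
        # ancestors get included and handles conflicts; this sketch does not.
        by_ancestor_score = sorted(entries, key=lambda e: e[1] / e[2], reverse=True)
        block, used = [], 0
        for name, anc_fee, anc_vsize in by_ancestor_score:
            if used + anc_vsize <= max_block_vsize:
                block.append(name)
                used += anc_vsize
        return block

    # A small high-feerate package wins over a large package paying more absolute fees:
    entries = [("small_pkg", 2000, 200), ("large_pkg", 3500, 1200)]
    print(build_block_template(entries, max_block_vsize=1000))   # ['small_pkg']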

That's why I don't think this is relevant to package validation. Commenting
on the model itself:

> Is this compatible with a model where a miner prioritizes absolute fees
over ancestor score, in the case that mempools aren't full-enough to
fulfill a block ?
>> Yes, A+C+D pays 2500sat more in fees, but it is also 1000vB larger. A
miner should prefer to utilize their block space more effectively.
> If your mempool is empty and only composed of A+C+D or A+B, I think
taking A+C+D is the most efficient block construction you can come up with
as a miner ?
> I think this point is worthy to discuss as otherwise we might downgrade
the efficiency of our current block construction strategy in periods of
near-empty mempools. A knowledge which could be discreetly leveraged by a
miner to gain an advantage on the rest of the mining ecosystem.

I believe this is suggesting "if our mempool has so few transactions that
it wouldn't reach block capacity, prioritize any increase in absolute fees,
even if the feerate is lower." I can see how this may result in a
higher-fee block in a specific scenario such as the one highlighted above,
but I don't think it is a sound model in general. It would be impossible to
tell when we should use this model: we could simply be in IBD or have restarted a
node with an old/empty mempool.dat, and even if it's a
low-transaction-volume time, we never know what transactions will trickle
in between now and the next block. Going back to the tiny 1-block mempool
scenario, i.e., if you _never_ wanted to keep transactions that you
wouldn't put in the next block, would you ever switch strategies?

Thanks again to everyone who's given their attention to the package mempool
accept proposal.

Best,
Gloria

[0]: https://github.com/bitcoin/bitcoin/pull/22674
[1]:
https://gist.github.com/glozow/dc4e9d5c5b14ade7cdfac40f43adb18a#package-rbf
[2]:
https://gist.github.com/glozow/dc4e9d5c5b14ade7cdfac40f43adb18a#always-try-individual-submission-first
[3]: https://github.com/bitcoin/bitcoin/pull/23121#issuecomment-929475999

On Tue, Sep 28, 2021 at 11:59 PM Antoine Riard <antoine.riard@gmail•com>
wrote:

> Hi Bastien
>
> > In the case of LN, an attacker can game this and heavily restrict
> your RBF attempts if you're only allowed to use confirmed inputs
> and have many channels (and a limited number of confirmed inputs).
> Otherwise you'll need node operators to pre-emptively split their
> utxos into many small utxos just for fee bumping, which is inefficient...
>
> I share the concern about splitting utxos into smaller ones.
> IIRC, the carve-out tolerance is only 2txn/10_000 vb. If one of your
> counterparties attaches a junk branch on her own anchor output, are you
> allowed to chain your self-owned unconfirmed CPFP ?
> I'm thinking about the topology "Chained CPFPs" exposed here :
> https://github.com/rust-bitcoin/rust-lightning/issues/989.
> Or if you have another L2 broadcast topology which could be safe w.r.t our
> current mempool logic :) ?
>
>
> Le lun. 27 sept. 2021 à 03:15, Bastien TEINTURIER <bastien@acinq•fr> a
> écrit :
>
>> I think we could restrain package acceptance to only confirmed inputs for
>>> now and revisit this point later ? For LN-anchor, you can assume that the
>>> fee-bumping UTXO feeding the CPFP is already
>>> confirmed. Or are there currently-deployed use-cases which would benefit
>>> from your proposed Rule #2 ?
>>>
>>
>> I think constraining package acceptance to only confirmed inputs
>> is very limiting and quite dangerous for L2 protocols.
>>
>> In the case of LN, an attacker can game this and heavily restrict
>> your RBF attempts if you're only allowed to use confirmed inputs
>> and have many channels (and a limited number of confirmed inputs).
>> Otherwise you'll need node operators to pre-emptively split their
>> utxos into many small utxos just for fee bumping, which is inefficient...
>>
>> Bastien
>>
>> Le lun. 27 sept. 2021 à 00:27, Antoine Riard via bitcoin-dev <
>> bitcoin-dev@lists•linuxfoundation.org> a écrit :
>>
>>> Hi Gloria,
>>>
>>> Thanks for your answers,
>>>
>>> > In summary, it seems that the decisions that might still need
>>> > attention/input from devs on this mailing list are:
>>> > 1. Whether we should start with multiple-parent-1-child or
>>> 1-parent-1-child.
>>> > 2. Whether it's ok to require that the child not have conflicts with
>>> > mempool transactions.
>>>
>>> Yes, 1) it would be good to have input from more potential users of
>>> package acceptance. And 2) I think it's more a matter of clearer wording
>>> of the proposal.
>>>
>>> However, see my final point on the relaxation around "unconfirmed
>>> inputs" which might in fact alter our current block construction strategy.
>>>
>>> > Right, the fact that we essentially always choose the first-seen
>>> witness is
>>> > an unfortunate limitation that exists already. Adding package mempool
>>> > accept doesn't worsen this, but the procedure in the future is to
>>> replace
>>> > the witness when it makes sense economically. We can also add logic to
>>> > allow package feerate to pay for witness replacements as well. This is
>>> > pretty far into the future, though.
>>>
>>> Yes, I agree package mempool accept doesn't worsen this. And it's not an issue
>>> for current LN as you can't significantly inflate a spending witness for
>>> the 2-of-2 funding output.
>>> However, it might be an issue for multi-party protocol where the
>>> spending script has alternative branches with asymmetric valid witness
>>> weights. Taproot should ease that kind of script so hopefully we would
>>> deploy wtxid-replacement not too far in the future.
>>>
>>> > I could be misunderstanding, but an attacker wouldn't be able to
>>> > batch-attack like this. Alice's package only conflicts with A' + D',
>>> not A'
>>> > + B' + C' + D'. She only needs to pay for evicting 2 transactions.
>>>
>>> Yeah I can be clearer, I think you have 2 pinning attack scenarios to
>>> consider.
>>>
>>> In LN, if you're trying to confirm a commitment transaction to time-out
>>> or claim on-chain a HTLC and the timelock is near-expiration, you should be
>>> ready to pay in commitment+2nd-stage HTLC transaction fees as much as the
>>> value offered by the HTLC.
>>>
>>> Following this security assumption, an attacker can exploit it by
>>> targeting commitment transactions from different channels together,
>>> blocking them under a high-fee child, whose fee value
>>> is equal to the top-value HTLC + 1. Victims' fee-bumping logic won't
>>> overbid, as it's not worth offering fees beyond their contested HTLCs. Apart
>>> from observing mempools state, victims can't learn they're targeted by the
>>> same attacker.
>>>
>>> To draw from the aforementioned topology, Mallory broadcasts A' + B' +
>>> C' + D', where A' conflicts with Alice's P1, B' conflicts with Bob's P2, C'
>>> conflicts with Caroll's P3. Let's assume P1 is confirming the top-value
>>> HTLC of the set. If D' fees are higher than P1 + 1, it won't be rational for
>>> Alice or Bob or Caroll to keep offering competing feerates. Mallory will be
>>> at loss on stealing P1, as she has paid more in fees but will realize a
>>> gain on P2+P3.
>>>
>>> In this model, Alice is allowed to evict those 2 transactions (A' + D')
>>> but as she is economically-bounded she won't succeed.
>>>
>>> Mallory is maliciously exploiting RBF rule 3 on absolute fee. I think
>>> this 1st pinning scenario is correct and "lucrative" when you sum the
>>> global gain/loss.
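>>>
>>> To put rough numbers on this 1st scenario (purely illustrative values):
>>>
>>>     # Toy values: top HTLC at stake in each victim's channel.
>>>     htlc1, htlc2, htlc3 = 1_000_000, 600_000, 400_000
>>>     d_prime_fee = htlc1 + 1   # D' fee: no victim rationally bids beyond their own HTLC
>>>
>>>     cost = d_prime_fee             # ignoring A'/B'/C' base fees for simplicity
>>>     gain = htlc1 + htlc2 + htlc3   # value captured if all three claims are delayed
>>>     print(htlc1 - cost)            # -1 sat : a loss if Alice's channel were the only target
>>>     print(gain - cost)             # 999,999 sat : net gain across the batch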
>>>
>>> There is a 2nd attack scenario where A + B + C + D, where D is the child
>>> of A,B,C. All those transactions are honestly issued by Alice. Once A + B +
>>> C + D are propagated in network mempools, Mallory is able to replace A + D
>>> with  A' + D' where D' is paying a higher fee. This package A' + D' will
>>> confirm soon if D feerate was compelling but Mallory succeeds in delaying
>>> the confirmation
>>> of B + C for one or more blocks. As B + C are pre-signed commitments
>>> with a low-fee rate they won't confirm without Alice issuing a new child E.
>>> Mallory can repeat the same trick by broadcasting
>>> B' + E' and delay again the confirmation of C.
>>>
>>> If the pending HTLCs in the remaining packages have a higher value than all
>>> the malicious fee over-bids, Mallory should realize a gain. With this 2nd
>>> pinning attack, the malicious entity buys confirmation delay of your
>>> packaged-together commitments.
>>>
>>> Assuming those attacks are correct, I'm leaning towards being
>>> conservative with the LDK broadcast backend. Though once again, other L2
>>> devs have likely other use-cases and opinions :)
>>>
>>> >  B' only needs to pay for itself in this case.
>>>
>>> Yes, I think it's a nice discount when the UTXO is single-owned. In the
>>> context of a shared-owned UTXO (e.g. LN), you might not get it if there is an
>>> in-mempool package already spending the UTXO and you have to assume the
>>> worst-case scenario, i.e. have B' commit enough fee to pay for the
>>> replacement bandwidth of A'. I think we can't do that much for this case...
>>>
>>> > If a package meets feerate requirements as a
>>> package, the parents in the transaction are allowed to replace-by-fee
>>> mempool transactions. The child cannot replace mempool transactions."
>>>
>>> I agree with the Mallory-vs-Alice case. Though what if Alice broadcasts A+B'
>>> to replace A+B because the first broadcast isn't satisfying anymore due to
>>> mempool spikes ? Assuming B' pays enough fees, I see that case as child B'
>>> replacing in-mempool transaction B, which I understand as going against "The
>>> child cannot replace mempool transactions".
>>>
>>> Maybe wording could be a bit clearer ?
>>>
>>> > While it would be nice to have full RBF, malleability of the child
>>> won't
>>> > block RBF here. If we're trying to replace A', we only require that A'
>>> > signals replaceability, and don't mind if its child doesn't.
>>>
>>> Yes, it sounds good.
>>>
>>> > Yes, A+C+D pays 2500sat more in fees, but it is also 1000vB larger. A
>>> miner
>>> > should prefer to utilize their block space more effectively.
>>>
>>> If your mempool is empty and only composed of A+C+D or A+B, I think
>>> taking A+C+D is the most efficient block construction you can come up with
>>> as a miner ?
>>>
>>> > No, because we don't use that model.
>>>
>>> Can you describe what miner model we are using ? Like the block
>>> construction strategy implemented by `addPackagesTxs` or also encompassing
>>> our current mempool acceptance policy, which I think relies on absolute fee
>>> over ancestor score in case of replacement ?
>>>
>>> I think this point is worth discussing, as otherwise we might downgrade
>>> the efficiency of our current block construction strategy in periods of
>>> near-empty mempools, knowledge which could be discreetly leveraged by a
>>> miner to gain an advantage over the rest of the mining ecosystem.
>>>
>>> Note, I think we *might* have to go in this direction if we want to
>>> replace replace-by-fee by replace-by-feerate or replace-by-ancestor and
>>> solve in-depth pinning attacks. Though if we do so,
>>> IMO we would need more thoughts.
>>>
>>> I think we could restrain package acceptance to only confirmed inputs
>>> for now and revisit this point later ? For LN-anchor, you can assume that
>>> the fee-bumping UTXO feeding the CPFP is already
>>> confirmed. Or are there currently-deployed use-cases which would benefit
>>> from your proposed Rule #2 ?
>>>
>>> Antoine
>>>
>>> Le jeu. 23 sept. 2021 à 11:36, Gloria Zhao <gloriajzhao@gmail•com> a
>>> écrit :
>>>
>>>> Hi Antoine,
>>>>
>>>> Thanks as always for your input. I'm glad we agree on so much!
>>>>
>>>> In summary, it seems that the decisions that might still need
>>>> attention/input from devs on this mailing list are:
>>>> 1. Whether we should start with multiple-parent-1-child or
>>>> 1-parent-1-child.
>>>> 2. Whether it's ok to require that the child not have conflicts with
>>>> mempool transactions.
>>>>
>>>> Responding to your comments...
>>>>
>>>> > IIUC, you have package A+B, during the dedup phase early in
>>>> `AcceptMultipleTransactions` if you observe same-txid-different-wtxid A'
>>>> and A' is higher feerate than A, you trim A and replace by A' ?
>>>>
>>>> > I think this approach is safe, the one who appears unsafe to me is
>>>> when A' has a _lower_ feerate, even if A' is already accepted by our
>>>> mempool ? In that case iirc that would be a pinning.
>>>>
>>>> Right, the fact that we essentially always choose the first-seen
>>>> witness is an unfortunate limitation that exists already. Adding package
>>>> mempool accept doesn't worsen this, but the procedure in the future is to
>>>> replace the witness when it makes sense economically. We can also add logic
>>>> to allow package feerate to pay for witness replacements as well. This is
>>>> pretty far into the future, though.
>>>>
>>>> > It sounds uneconomical for an attacker but I think it's not when you
>>>> consider that you can "batch" attack against multiple honest
>>>> counterparties. E.g, Mallory broadcast A' + B' + C' + D' where A' conflicts
>>>> with Alice's honest package P1, B' conflicts with Bob's honest package P2,
>>>> C' conflicts with Caroll's honest package P3. And D' is a high-fee child of
>>>> A' + B' + C'.
>>>>
>>>> > If D' is higher-fee than P1 or P2 or P3 but inferior to the sum of
>>>> HTLCs confirmed by P1+P2+P3, I think it's lucrative for the attacker ?
>>>>
>>>> I could be misunderstanding, but an attacker wouldn't be able to
>>>> batch-attack like this. Alice's package only conflicts with A' + D', not A'
>>>> + B' + C' + D'. She only needs to pay for evicting 2 transactions.
>>>>
>>>> > Do we assume that broadcasted packages are "honest" by default and
>>>> that the parent(s) always need the child to pass the fee checks, that way
>>>> saving the processing of individual transactions which are expected to fail
>>>> in 99% of cases or more ad hoc composition of packages at relay ?
>>>> > I think this point is quite dependent on the p2p packages
>>>> format/logic we'll end up on and that we should feel free to revisit it
>>>> later ?
>>>>
>>>> I think it's the opposite; there's no way for us to assume that p2p
>>>> packages will be "honest." I'd like to have two things before we expose on
>>>> P2P: (1) ensure that the amount of resources potentially allocated for
>>>> package validation isn't disproportionately higher than that of single
>>>> transaction validation and (2) only use package validation when we're
>>>> unsatisfied with the single validation result, e.g. we might get better
>>>> fees.
>>>> Yes, let's revisit this later :)
>>>>
>>>>  > Yes, if you receive A+B, and A is already in-mempool, I agree you can
>>>> discard its feerate as B should pay for all fees checked on its own. Where
>>>> I'm unclear is when you have in-mempool A+B and receive A+B'. Should B'
>>>> have a fee high enough to cover the bandwidth penalty replacement
>>>> (`PaysForRBF`, 2nd check) of both A+B' or only B' ?
>>>>
>>>>  B' only needs to pay for itself in this case.
>>>>
>>>> > > Do we want the child to be able to replace mempool transactions as
>>>> well?
>>>>
>>>> > If we mean when you have replaceable A+B then A'+B' try to replace
>>>> with a higher-feerate ? I think that's exactly the case we need for
>>>> Lightning as A+B is coming from Alice and A'+B' is coming from Bob :/
>>>>
>>>> Let me clarify this because I can see that my wording was ambiguous,
>>>> and then please let me know if it fits Lightning's needs?
>>>>
>>>> In my proposal, I wrote "If a package meets feerate requirements as a
>>>> package, the parents in the transaction are allowed to replace-by-fee
>>>> mempool transactions. The child cannot replace mempool transactions." What
>>>> I meant was: the package can replace mempool transactions if any of the
>>>> parents conflict with mempool transactions. The child cannot conflict
>>>> with any mempool transactions.
>>>> The Lightning use case this attempts to address is: Alice and Mallory
>>>> are LN counterparties, and have packages A+B and A'+B', respectively. A and
>>>> A' are their commitment transactions and conflict with each other; they
>>>> have shared inputs and different txids.
>>>> B spends Alice's anchor output from A. B' spends Mallory's anchor
>>>> output from A'. Thus, B and B' do not conflict with each other.
>>>> Alice can broadcast her package, A+B, to replace Mallory's package,
>>>> A'+B', since B doesn't conflict with the mempool.
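>>>>
>>>> In pseudocode terms (hypothetical helper names, not the actual PR#22290
>>>> logic), the restriction is:
>>>>
>>>>     # Parents may conflict with mempool transactions (those conflicts go
>>>>     # through package RBF); the child may not conflict with anything
>>>>     # already in the mempool.
>>>>     def package_rbf_child_ok(child, mempool):
>>>>         # The child must not spend any outpoint a mempool tx already spends.
>>>>         return not any(mempool.is_spent(outpoint) for outpoint in child.inputs)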
>>>>
>>>> Would this be ok?
>>>>
>>>> > The second option, a child of A'. In the LN case I think the CPFP is
>>>> attached on one's anchor output.
>>>>
>>>> While it would be nice to have full RBF, malleability of the child
>>>> won't block RBF here. If we're trying to replace A', we only require that
>>>> A' signals replaceability, and don't mind if its child doesn't.
>>>>
>>>> > > B has an ancestor score of 10sat/vb and D has an
>>>> > > ancestor score of ~2.9sat/vb. Since D's ancestor score is lower
>>>> than B's,
>>>> > > it fails the proposed package RBF Rule #2, so this package would be
>>>> > > rejected. Does this meet your expectations?
>>>>
>>>> > Well what sounds odd to me, in my example, we fail D even if it has a
>>>> higher-fee than B. Like A+B absolute fees are 2000 sats and A+C+D absolute
>>>> fees are 4500 sats ?
>>>>
>>>> Yes, A+C+D pays 2500sat more in fees, but it is also 1000vB larger. A
>>>> miner should prefer to utilize their block space more effectively.
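>>>>
>>>> Spelling out the arithmetic with the per-transaction values from your
>>>> example (A: 1000sat/100vB, B: 1000sat/100vB, C: 1000sat/1000vB, D:
>>>> 1500sat/100vB):
>>>>
>>>>     # Ancestor scores compared by the proposed package RBF Rule #2:
>>>>     b_score = (1000 + 1000) / (100 + 100)                 # A+B   -> 10 sat/vB
>>>>     d_score = (1000 + 1000 + 1500) / (100 + 1000 + 100)   # A+C+D -> ~2.9 sat/vB
>>>>     assert d_score < b_score  # so the A+C+D package fails Rule #2 and is rejected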
>>>>
>>>> > Is this compatible with a model where a miner prioritizes absolute
>>>> fees over ancestor score, in the case that mempools aren't full-enough to
>>>> fulfill a block ?
>>>>
>>>> No, because we don't use that model.
>>>>
>>>> Thanks,
>>>> Gloria
>>>>
>>>> On Thu, Sep 23, 2021 at 5:29 AM Antoine Riard <antoine.riard@gmail•com>
>>>> wrote:
>>>>
>>>>> > Correct, if B+C is too low feerate to be accepted, we will reject
>>>>> it. I
>>>>> > prefer this because it is incentive compatible: A can be mined by
>>>>> itself,
>>>>> > so there's no reason to prefer A+B+C instead of A.
>>>>> > As another way of looking at this, consider the case where we do
>>>>> accept
>>>>> > A+B+C and it sits at the "bottom" of our mempool. If our mempool
>>>>> reaches
>>>>> > capacity, we evict the lowest descendant feerate transactions, which
>>>>> are
>>>>> > B+C in this case. This gives us the same resulting mempool, with A
>>>>> and not
>>>>> > B+C.
>>>>>
>>>>> I agree here. Doing otherwise, we might evict other mempool transactions
>>>>> in `MempoolAccept::Finalize` with a higher feerate than B+C, while
>>>>> those evicted transactions are the most compelling for block construction.
>>>>>
>>>>> I thought at first missing this acceptance requirement would break a
>>>>> fee-bumping scheme like Parent-Pay-For-Child where a high-fee parent is
>>>>> attached to a child signed with SIGHASH_ANYONECANPAY but in this case the
>>>>> child fee is capturing the parent value. I can't think of other fee-bumping
>>>>> schemes potentially affected. If they do exist I would say they're wrong in
>>>>> their design assumptions.
>>>>>
>>>>> > If or when we have witness replacement, the logic is: if the
>>>>> individual
>>>>> > transaction is enough to replace the mempool one, the replacement
>>>>> will
>>>>> > happen during the preceding individual transaction acceptance, and
>>>>> > deduplication logic will work. Otherwise, we will try to deduplicate
>>>>> by
>>>>> > wtxid, see that we need a package witness replacement, and use the
>>>>> package
>>>>> > feerate to evaluate whether this is economically rational.
>>>>>
>>>>> IIUC, you have package A+B, during the dedup phase early in
>>>>> `AcceptMultipleTransactions` if you observe same-txid-different-wtxid A'
>>>>> and A' is higher feerate than A, you trim A and replace by A' ?
>>>>>
>>>>> I think this approach is safe, the one who appears unsafe to me is
>>>>> when A' has a _lower_ feerate, even if A' is already accepted by our
>>>>> mempool ? In that case iirc that would be a pinning.
>>>>>
>>>>> Good to see progress on witness replacement before we see usage of
>>>>> Taproot tree in the context of multi-party, where a malicious counterparty
>>>>> inflates its witness to jam an honest spending.
>>>>>
>>>>> (Note, the commit linked currently points nowhere :))
>>>>>
>>>>>
>>>>> > Please note that A may replace A' even if A' has higher fees than A
>>>>> > individually, because the proposed package RBF utilizes the fees and
>>>>> size
>>>>> > of the entire package. This just requires E to pay enough fees,
>>>>> although
>>>>> > this can be pretty high if there are also potential B' and C'
>>>>> competing
>>>>> > commitment transactions that we don't know about.
>>>>>
>>>>> Ah right, if the package acceptance waives `PaysMoreThanConflicts` for
>>>>> the individual check on A, the honest package should replace the pinning
>>>>> attempt. I've not fully parsed the proposed implementation yet.
>>>>>
>>>>> Though note, I think it's still unsafe for a Lightning
>>>>> multi-commitment-broadcast-as-one-package as a malicious A' might have an
>>>>> absolute fee higher than E. It sounds uneconomical for
>>>>> an attacker but I think it's not when you consider that you can
>>>>> "batch" attack against multiple honest counterparties. E.g, Mallory
>>>>> broadcast A' + B' + C' + D' where A' conflicts with Alice's honest package
>>>>> P1, B' conflicts with Bob's honest package P2, C' conflicts with Caroll's
>>>>> honest package P3. And D' is a high-fee child of A' + B' + C'.
>>>>>
>>>>> If D' is higher-fee than P1 or P2 or P3 but inferior to the sum of
>>>>> HTLCs confirmed by P1+P2+P3, I think it's lucrative for the attacker ?
>>>>>
>>>>> > So far, my understanding is that multi-parent-1-child is desired for
>>>>> > batched fee-bumping (
>>>>> > https://github.com/bitcoin/bitcoin/pull/22674#issuecomment-897951289)
>>>>> and
>>>>> > I've also seen your response which I have less context on (
>>>>> > https://github.com/bitcoin/bitcoin/pull/22674#issuecomment-900352202).
>>>>> That
>>>>> > being said, I am happy to create a new proposal for 1 parent + 1
>>>>> child
>>>>> > (which would be slightly simpler) and plan for moving to
>>>>> > multi-parent-1-child later if that is preferred. I am very
>>>>> interested in
>>>>> > hearing feedback on that approach.
>>>>>
>>>>> I think batched fee-bumping is okay as long as you don't have
>>>>> time-sensitive outputs encumbering your commitment transactions. For the
>>>>> reasons mentioned above, I think that's unsafe.
>>>>>
>>>>> What I'm worried about is L2 developers, potentially not aware of
>>>>> all the mempool subtleties, blurring the difference and always batching
>>>>> their broadcasts by default.
>>>>>
>>>>> IMO it's a good thing: by restraining to 1-parent + 1-child, we
>>>>> artificially constrain the L2 design space for now and minimize risks of
>>>>> unsafe usage of the package API :)
>>>>>
>>>>> I think that's a point where it would be relevant to have the opinion
>>>>> of more L2 devs.
>>>>>
>>>>> > I think there is a misunderstanding here - let me describe what I'm
>>>>> > proposing we'd do in this situation: we'll try individual submission
>>>>> for A,
>>>>> > see that it fails due to "insufficient fees." Then, we'll try package
>>>>> > validation for A+B and use package RBF. If A+B pays enough, it can
>>>>> still
>>>>> > replace A'. If A fails for a bad signature, we won't look at B or
>>>>> A+B. Does
>>>>> > this meet your expectations?
>>>>>
>>>>> Yes there was a misunderstanding, I think this approach is correct,
>>>>> it's more a question of performance. Do we assume that broadcasted packages
>>>>> are "honest" by default and that the parent(s) always need the child to
>>>>> pass the fee checks, that way saving the processing of individual
>>>>> transactions which are expected to fail in 99% of cases or more ad hoc
>>>>> composition of packages at relay ?
>>>>>
>>>>> I think this point is quite dependent on the p2p packages format/logic
>>>>> we'll end up on and that we should feel free to revisit it later ?
>>>>>
>>>>>
>>>>> > What problem are you trying to solve by the package feerate *after*
>>>>> dedup
>>>>> rule ?
>>>>> > My understanding is that an in-package transaction might be already
>>>>> in
>>>>> the mempool. Therefore, to compute a correct RBF penalty replacement,
>>>>> the
>>>>> vsize of this transaction could be discarded lowering the cost of
>>>>> package
>>>>> RBF.
>>>>>
>>>>> > I'm proposing that, when a transaction has already been submitted to
>>>>> > mempool, we would ignore both its fees and vsize when calculating
>>>>> package
>>>>> > feerate.
>>>>>
>>>>> Yes, if you receive A+B, and A is already in-mempool, I agree you can
>>>>> discard its feerate as B should pay for all fees checked on its own. Where
>>>>> I'm unclear is when you have in-mempool A+B and receive A+B'. Should B'
>>>>> have a fee high enough to cover the bandwidth penalty replacement
>>>>> (`PaysForRBF`, 2nd check) of both A+B' or only B' ?
>>>>>
>>>>> If you have a second-layer like current Lightning, you might have a
>>>>> counterparty commitment to replace and should always expect to have to pay
>>>>> for parent replacement bandwidth.
>>>>>
>>>>> Where a potential discount sounds interesting is when you have an
>>>>> unequivocal state on the first stage of transactions, e.g. a DLC's funding
>>>>> transaction, which might be CPFPed by any participant iirc.
>>>>>
>>>>> > Note that, if C' conflicts with C, it also conflicts with D, since D
>>>>> is a
>>>>> > descendant of C and would thus need to be evicted along with it.
>>>>>
>>>>> Ah once again I think it's a misunderstanding without the code under
>>>>> my eyes! If we do C' `PreChecks`, solve the conflicts provoked by it, i.e
>>>>> mark for potential eviction D and don't consider it for future conflicts in
>>>>> the rest of the package, I think D' `PreChecks` should be good ?
>>>>>
>>>>> > More generally, this example is surprising to me because I didn't
>>>>> think
>>>>> > packages would be used to fee-bump replaceable transactions. Do we
>>>>> want the
>>>>> > child to be able to replace mempool transactions as well?
>>>>>
>>>>> If we mean when you have replaceable A+B then A'+B' try to replace
>>>>> with a higher-feerate ? I think that's exactly the case we need for
>>>>> Lightning as A+B is coming from Alice and A'+B' is coming from Bob :/
>>>>>
>>>>> > I'm not sure what you mean? Let's say we have a package of parent A
>>>>> + child
>>>>> > B, where A is supposed to replace a mempool transaction A'. Are you
>>>>> saying
>>>>> > that counterparties are able to malleate the package child B, or a
>>>>> child of
>>>>> > A'?
>>>>>
>>>>> The second option, a child of A'. In the LN case I think the CPFP is
>>>>> attached on one's anchor output.
>>>>>
>>>>> I think it's good if we assume the
>>>>> solve-conflicts-after-parent's `PreChecks` mentioned above or fixing
>>>>> inherited signaling or full-rbf ?
>>>>>
>>>>> > Sorry, I don't understand what you mean by "preserve the package
>>>>> > integrity?" Could you elaborate?
>>>>>
>>>>> After thinking about it, the relaxation around the "new" unconfirmed input is
>>>>> not linked to trimming but, I would say, more to the multi-parent support.
>>>>>
>>>>> Let's say you have A+B trying to replace C+D where B is also spending
>>>>> already in-mempool E. To succeed, you need to waive the no-new-unconfirmed
>>>>> input as D isn't spending E.
>>>>>
>>>>> So good, I think we agree on the problem description here.
>>>>>
>>>>> > I am in agreement with your calculations but unsure if we disagree
>>>>> on the
>>>>> > expected outcome. Yes, B has an ancestor score of 10sat/vb and D has
>>>>> an
>>>>> > ancestor score of ~2.9sat/vb. Since D's ancestor score is lower than
>>>>> B's,
>>>>> > it fails the proposed package RBF Rule #2, so this package would be
>>>>> > rejected. Does this meet your expectations?
>>>>>
>>>>> Well what sounds odd to me, in my example, we fail D even if it has a
>>>>> higher-fee than B. Like A+B absolute fees are 2000 sats and A+C+D absolute
>>>>> fees are 4500 sats ?
>>>>>
>>>>> Is this compatible with a model where a miner prioritizes absolute
>>>>> fees over ancestor score, in the case that mempools aren't full-enough to
>>>>> fulfill a block ?
>>>>>
>>>>> Let me know if I can clarify a point.
>>>>>
>>>>> Antoine
>>>>>
>>>>> Le lun. 20 sept. 2021 à 11:10, Gloria Zhao <gloriajzhao@gmail•com> a
>>>>> écrit :
>>>>>
>>>>>>
>>>>>> Hi Antoine,
>>>>>>
>>>>>> First of all, thank you for the thorough review. I appreciate your
>>>>>> insight on LN requirements.
>>>>>>
>>>>>> > IIUC, you have a package A+B+C submitted for acceptance and A is
>>>>>> already in your mempool. You trim out A from the package and then evaluate
>>>>>> B+C.
>>>>>>
>>>>>> > I think this might be an issue if A is the higher-fee element of
>>>>>> the ABC package. B+C package fees might be under the mempool min fee and
>>>>>> will be rejected, potentially breaking the acceptance expectations of the
>>>>>> package issuer ?
>>>>>>
>>>>>> Correct, if B+C is too low feerate to be accepted, we will reject it.
>>>>>> I prefer this because it is incentive compatible: A can be mined by itself,
>>>>>> so there's no reason to prefer A+B+C instead of A.
>>>>>> As another way of looking at this, consider the case where we do
>>>>>> accept A+B+C and it sits at the "bottom" of our mempool. If our mempool
>>>>>> reaches capacity, we evict the lowest descendant feerate transactions,
>>>>>> which are B+C in this case. This gives us the same resulting mempool, with
>>>>>> A and not B+C.
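>>>>>>
>>>>>> As a small illustration (hypothetical numbers, with B spending A and C
>>>>>> spending B):
>>>>>>
>>>>>>     # A pays 3000sat/100vB; its descendants B and C each pay 100sat/100vB.
>>>>>>     a_descendant_score = (3000 + 100 + 100) / (100 + 100 + 100)  # ~10.7 sat/vB
>>>>>>     b_descendant_score = (100 + 100) / (100 + 100)               #   1 sat/vB
>>>>>>     # At capacity, the entry with the lowest descendant score (B) is evicted
>>>>>>     # along with its descendant C, leaving only A -- the same mempool as if
>>>>>>     # B+C had been rejected at submission time.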
>>>>>>
>>>>>>
>>>>>> > Further, I think the dedup should be done on wtxid, as you might
>>>>>> have multiple valid witnesses. Though with varying vsizes and as such
>>>>>> offering different feerates.
>>>>>>
>>>>>> I agree that variations of the same package with different witnesses
>>>>>> is a case that must be handled. I consider witness replacement to be a
>>>>>> project that can be done in parallel to package mempool acceptance because
>>>>>> being able to accept packages does not worsen the problem of a
>>>>>> same-txid-different-witness "pinning" attack.
>>>>>>
>>>>>> If or when we have witness replacement, the logic is: if the
>>>>>> individual transaction is enough to replace the mempool one, the
>>>>>> replacement will happen during the preceding individual transaction
>>>>>> acceptance, and deduplication logic will work. Otherwise, we will try to
>>>>>> deduplicate by wtxid, see that we need a package witness replacement, and
>>>>>> use the package feerate to evaluate whether this is economically rational.
>>>>>>
>>>>>> See the #22290 "handle package transactions already in mempool"
>>>>>> commit (
>>>>>> https://github.com/bitcoin/bitcoin/pull/22290/commits/fea75a2237b46cf76145242fecad7e274bfcb5ff),
>>>>>> which handles the case of same-txid-different-witness by simply using the
>>>>>> transaction in the mempool for now, with TODOs for what I just described.
>>>>>>
>>>>>>
>>>>>> > I'm not clearly understanding the accepted topologies. By "parent
>>>>>> and child to share a parent", do you mean the set of transactions A, B, C,
>>>>>> where B is spending A and C is spending A and B would be correct ?
>>>>>>
>>>>>> Yes, that is what I meant. Yes, that would be a valid package under
>>>>>> these rules.
>>>>>>
>>>>>> > If yes, is there a width-limit introduced or we fallback on
>>>>>> MAX_PACKAGE_COUNT=25 ?
>>>>>>
>>>>>> No, there is no limit on connectivity other than "child with all
>>>>>> unconfirmed parents." We will enforce MAX_PACKAGE_COUNT=25 and child's
>>>>>> in-mempool + in-package ancestor limits.
>>>>>>
>>>>>>
>>>>>> > Considering the current Core's mempool acceptance rules, I think
>>>>>> CPFP batching is unsafe for LN time-sensitive closure. A malicious tx-relay
>>>>>> jamming succeeding against one channel commitment transaction would contaminate
>>>>>> the remaining commitments sharing the same package.
>>>>>>
>>>>>> > E.g, you broadcast the package A+B+C+D+E where A,B,C,D are
>>>>>> commitment transactions and E a shared CPFP. If a malicious A' transaction
>>>>>> has a better feerate than A, the whole package acceptance will fail. Even
>>>>>> if A' confirms in the following block,
>>>>>> the propagation and confirmation of B+C+D have been delayed. This
>>>>>> could result in a loss of funds.
>>>>>>
>>>>>> Please note that A may replace A' even if A' has higher fees than A
>>>>>> individually, because the proposed package RBF utilizes the fees and size
>>>>>> of the entire package. This just requires E to pay enough fees, although
>>>>>> this can be pretty high if there are also potential B' and C' competing
>>>>>> commitment transactions that we don't know about.
>>>>>>
>>>>>>
>>>>>> > IMHO, I'm leaning towards deploying during a first phase
>>>>>> 1-parent/1-child. I think it's the most conservative step still improving
>>>>>> second-layer safety.
>>>>>>
>>>>>> So far, my understanding is that multi-parent-1-child is desired for
>>>>>> batched fee-bumping (
>>>>>> https://github.com/bitcoin/bitcoin/pull/22674#issuecomment-897951289)
>>>>>> and I've also seen your response which I have less context on (
>>>>>> https://github.com/bitcoin/bitcoin/pull/22674#issuecomment-900352202).
>>>>>> That being said, I am happy to create a new proposal for 1 parent + 1 child
>>>>>> (which would be slightly simpler) and plan for moving to
>>>>>> multi-parent-1-child later if that is preferred. I am very interested in
>>>>>> hearing feedback on that approach.
>>>>>>
>>>>>>
>>>>>> > If A+B is submitted to replace A', where A pays 0 sats, B pays 200
>>>>>> sats and A' pays 100 sats. If we apply the individual RBF on A, A+B
>>>>>> acceptance fails. For this reason I think the individual RBF should be
>>>>>> bypassed and only the package RBF apply ?
>>>>>>
>>>>>> I think there is a misunderstanding here - let me describe what I'm
>>>>>> proposing we'd do in this situation: we'll try individual submission for A,
>>>>>> see that it fails due to "insufficient fees." Then, we'll try package
>>>>>> validation for A+B and use package RBF. If A+B pays enough, it can still
>>>>>> replace A'. If A fails for a bad signature, we won't look at B or A+B. Does
>>>>>> this meet your expectations?
>>>>>>
>>>>>>
>>>>>> > What problem are you trying to solve by the package feerate *after*
>>>>>> dedup rule ?
>>>>>> > My understanding is that an in-package transaction might be already
>>>>>> in the mempool. Therefore, to compute a correct RBF penalty replacement,
>>>>>> the vsize of this transaction could be discarded lowering the cost of
>>>>>> package RBF.
>>>>>>
>>>>>> I'm proposing that, when a transaction has already been submitted to
>>>>>> mempool, we would ignore both its fees and vsize when calculating package
>>>>>> feerate. In example G2, we shouldn't count M1 fees after its submission to
>>>>>> mempool, since M1's fees have already been used to pay for its individual
>>>>>> bandwidth, and it shouldn't be used again to pay for P2 and P3's bandwidth.
>>>>>> We also shouldn't count its vsize, since it has already been paid for.
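>>>>>>
>>>>>> In other words (a sketch with hypothetical tx dicts, not the PR's actual
>>>>>> code):
>>>>>>
>>>>>>     def deduped_package_feerate(package, mempool_txids):
>>>>>>         # Ignore both the fees and the vsize of transactions that are
>>>>>>         # already in the mempool; they have already paid for themselves.
>>>>>>         new = [tx for tx in package if tx["txid"] not in mempool_txids]
>>>>>>         if not new:
>>>>>>             return None  # fully deduplicated: nothing left to evaluate
>>>>>>         return sum(tx["fee"] for tx in new) / sum(tx["vsize"] for tx in new)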
>>>>>>
>>>>>>
>>>>>> > I think this is a footgunish API, as if a package issuer send the
>>>>>> multiple-parent-one-child package A,B,C,D where D is the child of A,B,C.
>>>>>> Then try to broadcast the higher-feerate C'+D' package, it should be
>>>>>> rejected. So it's breaking the naive broadcaster assumption that a
>>>>>> higher-feerate/higher-fee package always replaces ?
>>>>>>
>>>>>> Note that, if C' conflicts with C, it also conflicts with D, since D
>>>>>> is a descendant of C and would thus need to be evicted along with it.
>>>>>> Implicitly, D' would not be in conflict with D.
>>>>>> More generally, this example is surprising to me because I didn't
>>>>>> think packages would be used to fee-bump replaceable transactions. Do we
>>>>>> want the child to be able to replace mempool transactions as well? This can
>>>>>> be implemented with a bit of additional logic.
>>>>>>
>>>>>> > I think this is unsafe for L2s if counterparties have malleability
>>>>>> of the child transaction. They can block your package replacement by
>>>>>> opting-out from RBF signaling. IIRC, LN's "anchor output" presents such an
>>>>>> ability.
>>>>>>
>>>>>> I'm not sure what you mean? Let's say we have a package of parent A +
>>>>>> child B, where A is supposed to replace a mempool transaction A'. Are you
>>>>>> saying that counterparties are able to malleate the package child B, or a
>>>>>> child of A'? If they can malleate a child of A', that shouldn't matter as
>>>>>> long as A' is signaling replacement. This would be handled identically with
>>>>>> full RBF and what Core currently implements.
>>>>>>
>>>>>> > I think this is an issue brought by the trimming during the dedup
>>>>>> phase. If we preserve the package integrity, only re-using the tx-level
>>>>>> checks results of already in-mempool transactions to gain in CPU time we
>>>>>> won't have this issue. Package children can add unconfirmed inputs as long as
>>>>>> they're in-package, the bip125 rule2 is only evaluated against parents ?
>>>>>>
>>>>>> Sorry, I don't understand what you mean by "preserve the package
>>>>>> integrity?" Could you elaborate?
>>>>>>
>>>>>> > Let's say you have in-mempool A, B where A pays 10 sat/vb for 100
>>>>>> vbytes and B pays 10 sat/vb for 100 vbytes. You have the candidate
>>>>>> replacement D spending both A and C where D pays 15sat/vb for 100 vbytes
>>>>>> and C pays 1 sat/vb for 1000 vbytes.
>>>>>>
>>>>>> > Package A + B ancestor score is 10 sat/vb.
>>>>>>
>>>>>> > D has a higher feerate/absolute fee than B.
>>>>>>
>>>>>> > Package A + C + D ancestor score is ~ 3 sat/vb ((A's 1000 sats +
>>>>>> C's 1000 sats + D's 1500 sats) / A's 100 vb + C's 1000 vb + D's 100 vb)
>>>>>>
>>>>>> I am in agreement with your calculations but unsure if we disagree on
>>>>>> the expected outcome. Yes, B has an ancestor score of 10sat/vb and D has an
>>>>>> ancestor score of ~2.9sat/vb. Since D's ancestor score is lower than B's,
>>>>>> it fails the proposed package RBF Rule #2, so this package would be
>>>>>> rejected. Does this meet your expectations?
>>>>>>
>>>>>> Thank you for linking to projects that might be interested in package
>>>>>> relay :)
>>>>>>
>>>>>> Thanks,
>>>>>> Gloria
>>>>>>
>>>>>> On Mon, Sep 20, 2021 at 12:16 AM Antoine Riard <
>>>>>> antoine.riard@gmail•com> wrote:
>>>>>>
>>>>>>> Hi Gloria,
>>>>>>>
>>>>>>> > A package may contain transactions that are already in the
>>>>>>> mempool. We
>>>>>>> > remove
>>>>>>> > ("deduplicate") those transactions from the package for the
>>>>>>> purposes of
>>>>>>> > package
>>>>>>> > mempool acceptance. If a package is empty after deduplication, we
>>>>>>> do
>>>>>>> > nothing.
>>>>>>>
>>>>>>> IIUC, you have a package A+B+C submitted for acceptance and A is
>>>>>>> already in your mempool. You trim out A from the package and then evaluate
>>>>>>> B+C.
>>>>>>>
>>>>>>> I think this might be an issue if A is the higher-fee element of the
>>>>>>> ABC package. B+C package fees might be under the mempool min fee and will
>>>>>>> be rejected, potentially breaking the acceptance expectations of the
>>>>>>> package issuer ?
>>>>>>>
>>>>>>> Further, I think the dedup should be done on wtxid, as you might
>>>>>>> have multiple valid witnesses. Though with varying vsizes and as such
>>>>>>> offering different feerates.
>>>>>>>
>>>>>>> E.g you're going to evaluate the package A+B and A' is already in
>>>>>>> your mempool with a bigger valid witness. You trim A based on txid, then
>>>>>>> you evaluate A'+B, which fails the fee checks. However, evaluating A+B
>>>>>>> would have been a success.
>>>>>>>
>>>>>>> AFAICT, the dedup rationale would be to save on CPU time/disk IO, to
>>>>>>> avoid repeated signature verification and parent UTXO fetches ? Can we
>>>>>>> achieve the same goal by bypassing tx-level checks for already-in-mempool txn
>>>>>>> while conserving the package integrity for package-level checks ?
>>>>>>>
>>>>>>> > Note that it's possible for the parents to be
>>>>>>> > indirect
>>>>>>> > descendants/ancestors of one another, or for parent and child to
>>>>>>> share a
>>>>>>> > parent,
>>>>>>> > so we cannot make any other topology assumptions.
>>>>>>>
>>>>>>> I'm not clearly understanding the accepted topologies. By "parent
>>>>>>> and child to share a parent", do you mean the set of transactions A, B, C,
>>>>>>> where B is spending A and C is spending A and B would be correct ?
>>>>>>>
>>>>>>> If yes, is there a width-limit introduced or we fallback on
>>>>>>> MAX_PACKAGE_COUNT=25 ?
>>>>>>>
>>>>>>> IIRC, one rationale to come with this topology limitation was to
>>>>>>> lower the DoS risks when potentially deploying p2p packages.
>>>>>>>
>>>>>>> Considering the current Core's mempool acceptance rules, I think
>>>>>>> CPFP batching is unsafe for LN time-sensitive closure. A malicious tx-relay
>>>>>>> jamming succeeding against one channel commitment transaction would contaminate
>>>>>>> the remaining commitments sharing the same package.
>>>>>>>
>>>>>>> E.g, you broadcast the package A+B+C+D+E where A,B,C,D are
>>>>>>> commitment transactions and E a shared CPFP. If a malicious A' transaction
>>>>>>> has a better feerate than A, the whole package acceptance will fail. Even
>>>>>>> if A' confirms in the following block,
>>>>>>> the propagation and confirmation of B+C+D have been delayed. This
>>>>>>> could result in a loss of funds.
>>>>>>>
>>>>>>> That said, if you're broadcasting commitment transactions without
>>>>>>> time-sensitive HTLC outputs, I think the batching is effectively a fee
>>>>>>> saving as you don't have to duplicate the CPFP.
>>>>>>>
>>>>>>> IMHO, I'm leaning towards deploying during a first phase
>>>>>>> 1-parent/1-child. I think it's the most conservative step still improving
>>>>>>> second-layer safety.
>>>>>>>
>>>>>>> > *Rationale*:  It would be incorrect to use the fees of
>>>>>>> transactions that are
>>>>>>> > already in the mempool, as we do not want a transaction's fees to
>>>>>>> be
>>>>>>> > double-counted for both its individual RBF and package RBF.
>>>>>>>
>>>>>>> I'm unsure about the logical order of the checks proposed.
>>>>>>>
>>>>>>> If A+B is submitted to replace A', where A pays 0 sats, B pays 200
>>>>>>> sats and A' pays 100 sats. If we apply the individual RBF on A, A+B
>>>>>>> acceptance fails. For this reason I think the individual RBF should be
>>>>>>> bypassed and only the package RBF apply ?
>>>>>>>
>>>>>>> Note this situation is plausible, with current LN design, your
>>>>>>> counterparty can have a commitment transaction with a better fee just by
>>>>>>> selecting a higher `dust_limit_satoshis` than yours.
>>>>>>>
>>>>>>> > Examples F and G [14] show the same package, but P1 is submitted
>>>>>>> > individually before
>>>>>>> > the package in example G. In example F, we can see that the 300vB
>>>>>>> package
>>>>>>> > pays
>>>>>>> > an additional 200sat in fees, which is not enough to pay for its
>>>>>>> own
>>>>>>> > bandwidth
>>>>>>> > (BIP125#4). In example G, we can see that P1 pays enough to
>>>>>>> replace M1, but
>>>>>>> > using P1's fees again during package submission would make it look
>>>>>>> like a
>>>>>>> > 300sat
>>>>>>> > increase for a 200vB package. Even including its fees and size
>>>>>>> would not be
>>>>>>> > sufficient in this example, since the 300sat looks like enough for
>>>>>>> the 300vB
>>>>>>> > package. The calculation after deduplication is 100sat increase
>>>>>>> for a
>>>>>>> > package
>>>>>>> > of size 200vB, which correctly fails BIP125#4. Assume all
>>>>>>> transactions have
>>>>>>> > a
>>>>>>> > size of 100vB.
>>>>>>>
>>>>>>> What problem are you trying to solve by the package feerate *after*
>>>>>>> dedup rule ?
>>>>>>>
>>>>>>> My understanding is that an in-package transaction might be already
>>>>>>> in the mempool. Therefore, to compute a correct RBF penalty replacement,
>>>>>>> the vsize of this transaction could be discarded lowering the cost of
>>>>>>> package RBF.
>>>>>>>
>>>>>>> If we keep a "safe" dedup mechanism (see my point above), I think
>>>>>>> this discount is justified, as the validation cost of node operators is
>>>>>>> paid for ?
>>>>>>>
>>>>>>> > The child cannot replace mempool transactions.
>>>>>>>
>>>>>>> Let's say you issue package A+B, then package C+B', where B' is a
>>>>>>> child of both A and C. This rule fails the acceptance of C+B' ?
>>>>>>>
>>>>>>> I think this is a footgunish API, as if a package issuer send the
>>>>>>> multiple-parent-one-child package A,B,C,D where D is the child of A,B,C.
>>>>>>> Then try to broadcast the higher-feerate C'+D' package, it should be
>>>>>>> rejected. So it's breaking the naive broadcaster assumption that a
>>>>>>> higher-feerate/higher-fee package always replaces ? And it might be unsafe
>>>>>>> in protocols where states are symmetric. E.g a malicious counterparty
>>>>>>> broadcasts first S+A, then you honestly broadcast S+B, where B pays better
>>>>>>> fees.
>>>>>>>
>>>>>>> > All mempool transactions to be replaced must signal replaceability.
>>>>>>>
>>>>>>> I think this is unsafe for L2s if counterparties have malleability
>>>>>>> of the child transaction. They can block your package replacement by
>>>>>>> opting-out from RBF signaling. IIRC, LN's "anchor output" presents such an
>>>>>>> ability.
>>>>>>>
>>>>>>> I think it's better to either fix inherited signaling or move
>>>>>>> towards full-rbf.
>>>>>>>
>>>>>>> > if a package parent has already been submitted, it would
>>>>>>> > look
>>>>>>> >like the child is spending a "new" unconfirmed input.
>>>>>>>
>>>>>>> I think this is an issue brought by the trimming during the dedup
>>>>>>> phase. If we preserve the package integrity, only re-using the tx-level
>>>>>>> checks results of already in-mempool transactions to gain in CPU time we
>>>>>>> won't have this issue. Package children can add unconfirmed inputs as long as
>>>>>>> they're in-package, the bip125 rule2 is only evaluated against parents ?
>>>>>>>
>>>>>>> > However, we still achieve the same goal of requiring the
>>>>>>> > replacement
>>>>>>> > transactions to have an ancestor score at least as high as the
>>>>>>> original
>>>>>>> > ones.
>>>>>>>
>>>>>>> I'm not sure if this holds...
>>>>>>>
>>>>>>> Let's say you have in-mempool A, B where A pays 10 sat/vb for 100
>>>>>>> vbytes and B pays 10 sat/vb for 100 vbytes. You have the candidate
>>>>>>> replacement D spending both A and C where D pays 15sat/vb for 100 vbytes
>>>>>>> and C pays 1 sat/vb for 1000 vbytes.
>>>>>>>
>>>>>>> Package A + B ancestor score is 10 sat/vb.
>>>>>>>
>>>>>>> D has a higher feerate/absolute fee than B.
>>>>>>>
>>>>>>> Package A + C + D ancestor score is ~ 3 sat/vb ((A's 1000 sats + C's
>>>>>>> 1000 sats + D's 1500 sats) /
>>>>>>> A's 100 vb + C's 1000 vb + D's 100 vb)
>>>>>>>
>>>>>>> Overall, this is a review through the lenses of LN requirements. I
>>>>>>> think other L2 protocols/applications
>>>>>>> could be candidates to using package accept/relay such as:
>>>>>>> * https://github.com/lightninglabs/pool
>>>>>>> * https://github.com/discreetlogcontracts/dlcspecs
>>>>>>> * https://github.com/bitcoin-teleport/teleport-transactions/
>>>>>>> * https://github.com/sapio-lang/sapio
>>>>>>> *
>>>>>>> https://github.com/commerceblock/mercury/blob/master/doc/statechains.md
>>>>>>> * https://github.com/revault/practical-revault
>>>>>>>
>>>>>>> Thanks for rolling forward the ball on this subject.
>>>>>>>
>>>>>>> Antoine
>>>>>>>
>>>>>>> Le jeu. 16 sept. 2021 à 03:55, Gloria Zhao via bitcoin-dev <
>>>>>>> bitcoin-dev@lists•linuxfoundation.org> a écrit :
>>>>>>>
>>>>>>>> Hi there,
>>>>>>>>
>>>>>>>> I'm writing to propose a set of mempool policy changes to enable
>>>>>>>> package
>>>>>>>> validation (in preparation for package relay) in Bitcoin Core.
>>>>>>>> These would not
>>>>>>>> be consensus or P2P protocol changes. However, since mempool policy
>>>>>>>> significantly affects transaction propagation, I believe this is
>>>>>>>> relevant for
>>>>>>>> the mailing list.
>>>>>>>>
>>>>>>>> My proposal enables packages consisting of multiple parents and 1
>>>>>>>> child. If you
>>>>>>>> develop software that relies on specific transaction relay
>>>>>>>> assumptions and/or
>>>>>>>> are interested in using package relay in the future, I'm very
>>>>>>>> interested to hear
>>>>>>>> your feedback on the utility or restrictiveness of these package
>>>>>>>> policies for
>>>>>>>> your use cases.
>>>>>>>>
>>>>>>>> A draft implementation of this proposal can be found in [Bitcoin
>>>>>>>> Core
>>>>>>>> PR#22290][1].
>>>>>>>>
>>>>>>>> An illustrated version of this post can be found at
>>>>>>>> https://gist.github.com/glozow/dc4e9d5c5b14ade7cdfac40f43adb18a.
>>>>>>>> I have also linked the images below.
>>>>>>>>
>>>>>>>> ## Background
>>>>>>>>
>>>>>>>> Feel free to skip this section if you are already familiar with
>>>>>>>> mempool policy
>>>>>>>> and package relay terminology.
>>>>>>>>
>>>>>>>> ### Terminology Clarifications
>>>>>>>>
>>>>>>>> * Package = an ordered list of related transactions, representable
>>>>>>>> by a Directed
>>>>>>>>   Acyclic Graph.
>>>>>>>> * Package Feerate = the total modified fees divided by the total
>>>>>>>> virtual size of
>>>>>>>>   all transactions in the package.
>>>>>>>>     - Modified fees = a transaction's base fees + fee delta applied
>>>>>>>> by the user
>>>>>>>>       with `prioritisetransaction`. As such, we expect this to vary
>>>>>>>> across
>>>>>>>> mempools.
>>>>>>>>     - Virtual Size = the maximum of virtual sizes calculated using
>>>>>>>> [BIP141
>>>>>>>>       virtual size][2] and sigop weight. [Implemented here in
>>>>>>>> Bitcoin Core][3].
>>>>>>>>     - Note that feerate is not necessarily based on the base fees
>>>>>>>> and serialized
>>>>>>>>       size.
>>>>>>>>
>>>>>>>> * Fee-Bumping = user/wallet actions that take advantage of miner
>>>>>>>> incentives to
>>>>>>>>   boost a transaction's candidacy for inclusion in a block,
>>>>>>>> including Child Pays
>>>>>>>> for Parent (CPFP) and [BIP125][12] Replace-by-Fee (RBF). Our
>>>>>>>> intention in
>>>>>>>> mempool policy is to recognize when the new transaction is more
>>>>>>>> economical to
>>>>>>>> mine than the original one(s) but not open DoS vectors, so there
>>>>>>>> are some
>>>>>>>> limitations.
>>>>>>>>
>>>>>>>> ### Policy
>>>>>>>>
>>>>>>>> The purpose of the mempool is to store the best (to be most
>>>>>>>> incentive-compatible
>>>>>>>> with miners, highest feerate) candidates for inclusion in a block.
>>>>>>>> Miners use
>>>>>>>> the mempool to build block templates. The mempool is also useful as
>>>>>>>> a cache for
>>>>>>>> boosting block relay and validation performance, aiding transaction
>>>>>>>> relay, and
>>>>>>>> generating feerate estimations.
>>>>>>>>
>>>>>>>> Ideally, all consensus-valid transactions paying reasonable fees
>>>>>>>> should make it
>>>>>>>> to miners through normal transaction relay, without any special
>>>>>>>> connectivity or
>>>>>>>> relationships with miners. On the other hand, nodes do not have
>>>>>>>> unlimited
>>>>>>>> resources, and a P2P network designed to let any honest node
>>>>>>>> broadcast their
>>>>>>>> transactions also exposes the transaction validation engine to DoS
>>>>>>>> attacks from
>>>>>>>> malicious peers.
>>>>>>>>
>>>>>>>> As such, for unconfirmed transactions we are considering for our
>>>>>>>> mempool, we
>>>>>>>> apply a set of validation rules in addition to consensus, primarily
>>>>>>>> to protect
>>>>>>>> us from resource exhaustion and aid our efforts to keep the highest
>>>>>>>> fee
>>>>>>>> transactions. We call this mempool _policy_: a set of (configurable,
>>>>>>>> node-specific) rules that transactions must abide by in order to be
>>>>>>>> accepted
>>>>>>>> into our mempool. Transaction "Standardness" rules and mempool
>>>>>>>> restrictions such
>>>>>>>> as "too-long-mempool-chain" are both examples of policy.
>>>>>>>>
>>>>>>>> ### Package Relay and Package Mempool Accept
>>>>>>>>
>>>>>>>> In transaction relay, we currently consider transactions one at a
>>>>>>>> time for
>>>>>>>> submission to the mempool. This creates a limitation in the node's
>>>>>>>> ability to
>>>>>>>> determine which transactions have the highest feerates, since we
>>>>>>>> cannot take
>>>>>>>> into account descendants (i.e. cannot use CPFP) until all the
>>>>>>>> transactions are
>>>>>>>> in the mempool. Similarly, we cannot use a transaction's
>>>>>>>> descendants when
>>>>>>>> considering it for RBF. When an individual transaction does not
>>>>>>>> meet the mempool
>>>>>>>> minimum feerate and the user isn't able to create a replacement
>>>>>>>> transaction
>>>>>>>> directly, it will not be accepted by mempools.
>>>>>>>>
>>>>>>>> This limitation presents a security issue for applications and
>>>>>>>> users relying on
>>>>>>>> time-sensitive transactions. For example, Lightning and other
>>>>>>>> protocols create
>>>>>>>> UTXOs with multiple spending paths, where one counterparty's
>>>>>>>> spending path opens
>>>>>>>> up after a timelock, and users are protected from cheating
>>>>>>>> scenarios as long as
>>>>>>>> they redeem on-chain in time. A key security assumption is that all
>>>>>>>> parties'
>>>>>>>> transactions will propagate and confirm in a timely manner. This
>>>>>>>> assumption can
>>>>>>>> be broken if fee-bumping does not work as intended.
>>>>>>>>
>>>>>>>> The end goal for Package Relay is to consider multiple transactions
>>>>>>>> at the same
>>>>>>>> time, e.g. a transaction with its high-fee child. This may help us
>>>>>>>> better
>>>>>>>> determine whether transactions should be accepted to our mempool,
>>>>>>>> especially if
>>>>>>>> they don't meet fee requirements individually or are better RBF
>>>>>>>> candidates as a
>>>>>>>> package. A combination of changes to mempool validation logic,
>>>>>>>> policy, and
>>>>>>>> transaction relay allows us to better propagate the transactions
>>>>>>>> with the
>>>>>>>> highest package feerates to miners, and makes fee-bumping tools
>>>>>>>> more powerful
>>>>>>>> for users.
>>>>>>>>
>>>>>>>> The "relay" part of Package Relay suggests P2P messaging changes,
>>>>>>>> but a large
>>>>>>>> part of the changes are in the mempool's package validation logic.
>>>>>>>> We call this
>>>>>>>> *Package Mempool Accept*.
>>>>>>>>
>>>>>>>> ### Previous Work
>>>>>>>>
>>>>>>>> * Given that mempool validation is DoS-sensitive and complex, it
>>>>>>>> would be
>>>>>>>>   dangerous to haphazardly tack on package validation logic. Many
>>>>>>>> efforts have
>>>>>>>> been made to make mempool validation less opaque (see [#16400][4],
>>>>>>>> [#21062][5],
>>>>>>>> [#22675][6], [#22796][7]).
>>>>>>>> * [#20833][8] Added basic capabilities for package validation, test
>>>>>>>> accepts only
>>>>>>>>   (no submission to mempool).
>>>>>>>> * [#21800][9] Implemented package ancestor/descendant limit checks
>>>>>>>> for arbitrary
>>>>>>>>   packages. Still test accepts only.
>>>>>>>> * Previous package relay proposals (see [#16401][10], [#19621][11]).
>>>>>>>>
>>>>>>>> ### Existing Package Rules
>>>>>>>>
>>>>>>>> These are in master as introduced in [#20833][8] and [#21800][9].
>>>>>>>> I'll consider
>>>>>>>> them as "given" in the rest of this document, though they can be
>>>>>>>> changed, since
>>>>>>>> package validation is test-accept only right now.
>>>>>>>>
>>>>>>>> 1. A package cannot exceed `MAX_PACKAGE_COUNT=25` count and
>>>>>>>> `MAX_PACKAGE_SIZE=101KvB` total size [8]
>>>>>>>>
>>>>>>>>    *Rationale*: This is already enforced as mempool
>>>>>>>> ancestor/descendant limits.
>>>>>>>> Presumably, transactions in a package are all related, so exceeding
>>>>>>>> this limit
>>>>>>>> would mean that the package can either be split up or it wouldn't
>>>>>>>> pass this
>>>>>>>> mempool policy.
>>>>>>>>
>>>>>>>> 2. Packages must be topologically sorted: if any dependencies exist
>>>>>>>> between
>>>>>>>> transactions, parents must appear somewhere before children. [8]
>>>>>>>>
>>>>>>>> 3. A package cannot have conflicting transactions, i.e. none of
>>>>>>>> them can spend
>>>>>>>> the same inputs. This also means there cannot be duplicate
>>>>>>>> transactions. [8]
>>>>>>>>
>>>>>>>> 4. When packages are evaluated against ancestor/descendant limits
>>>>>>>> in a test
>>>>>>>> accept, the union of all of their descendants and ancestors is
>>>>>>>> considered. This
>>>>>>>> is essentially a "worst case" heuristic where every transaction in
>>>>>>>> the package
>>>>>>>> is treated as each other's ancestor and descendant. [8]
>>>>>>>> Packages for which ancestor/descendant limits are accurately
>>>>>>>> captured by this
>>>>>>>> heuristic: [19]
>>>>>>>>
>>>>>>>> There are also limitations such as the fact that CPFP carve out is
>>>>>>>> not applied
>>>>>>>> to package transactions. #20833 also disables RBF in package
>>>>>>>> validation; this
>>>>>>>> proposal overrides that to allow packages to use RBF.
>>>>>>>>
>>>>>>>> ## Proposed Changes
>>>>>>>>
>>>>>>>> The next step in the Package Mempool Accept project is to implement
>>>>>>>> submission
>>>>>>>> to mempool, initially through RPC only. This allows us to test the
>>>>>>>> submission
>>>>>>>> logic before exposing it on P2P.
>>>>>>>>
>>>>>>>> ### Summary
>>>>>>>>
>>>>>>>> - Packages may contain already-in-mempool transactions.
>>>>>>>> - Packages are 2 generations, Multi-Parent-1-Child.
>>>>>>>> - Fee-related checks use the package feerate. This means that
>>>>>>>> wallets can
>>>>>>>> create a package that utilizes CPFP.
>>>>>>>> - Parents are allowed to RBF mempool transactions with a set of
>>>>>>>> rules similar
>>>>>>>>   to BIP125. This enables a combination of CPFP and RBF, where a
>>>>>>>> transaction's descendant fees pay for replacing mempool conflicts.
>>>>>>>>
>>>>>>>> There is a draft implementation in [#22290][1]. It is WIP, but
>>>>>>>> feedback is
>>>>>>>> always welcome.
>>>>>>>>
>>>>>>>> ### Details
>>>>>>>>
>>>>>>>> #### Packages May Contain Already-in-Mempool Transactions
>>>>>>>>
>>>>>>>> A package may contain transactions that are already in the mempool.
>>>>>>>> We remove
>>>>>>>> ("deduplicate") those transactions from the package for the
>>>>>>>> purposes of package
>>>>>>>> mempool acceptance. If a package is empty after deduplication, we
>>>>>>>> do nothing.
>>>>>>>>
>>>>>>>> *Rationale*: Mempools vary across the network. It's possible for a
>>>>>>>> parent to be
>>>>>>>> accepted to the mempool of a peer on its own due to differences in
>>>>>>>> policy and
>>>>>>>> fee market fluctuations. We should not reject or penalize the
>>>>>>>> entire package for
>>>>>>>> an individual transaction as that could be a censorship vector.
>>>>>>>>
>>>>>>>> #### Packages Are Multi-Parent-1-Child
>>>>>>>>
>>>>>>>> Only packages of a specific topology are permitted. Namely, a
>>>>>>>> package is exactly
>>>>>>>> 1 child with all of its unconfirmed parents. After deduplication,
>>>>>>>> the package
>>>>>>>> may be exactly the same, empty, 1 child, 1 child with just some of
>>>>>>>> its
>>>>>>>> unconfirmed parents, etc. Note that it's possible for the parents
>>>>>>>> to be indirect
>>>>>>>> descendants/ancestors of one another, or for parent and child to
>>>>>>>> share a parent,
>>>>>>>> so we cannot make any other topology assumptions.
>>>>>>>>
>>>>>>>> *Rationale*: This allows for fee-bumping by CPFP. Allowing multiple
>>>>>>>> parents
>>>>>>>> makes it possible to fee-bump a batch of transactions. Restricting
>>>>>>>> packages to a
>>>>>>>> defined topology is also easier to reason about and simplifies the
>>>>>>>> validation
>>>>>>>> logic greatly. Multi-parent-1-child allows us to think of the
>>>>>>>> package as one big
>>>>>>>> transaction, where:
>>>>>>>>
>>>>>>>> - Inputs = all the inputs of parents + inputs of the child that
>>>>>>>> come from
>>>>>>>>   confirmed UTXOs
>>>>>>>> - Outputs = all the outputs of the child + all outputs of the
>>>>>>>> parents that
>>>>>>>>   aren't spent by other transactions in the package
>>>>>>>>
>>>>>>>> Examples of packages that follow this rule (variations of example A
>>>>>>>> show some
>>>>>>>> possibilities after deduplication): ![image][15]
>>>>>>>>
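>>>>>>>> An illustrative sketch of the shape check and of the "one big
>>>>>>>> transaction" view (Python pseudocode; attribute names are
>>>>>>>> hypothetical, and whether *all* unconfirmed parents are present can
>>>>>>>> only be checked later against the mempool and UTXO set):
>>>>>>>>
>>>>>>>> ```python
>>>>>>>> from collections import namedtuple
>>>>>>>>
>>>>>>>> # inputs: list of (txid, vout); n_outputs: number of outputs created.
>>>>>>>> Tx = namedtuple("Tx", "txid inputs n_outputs")
>>>>>>>>
>>>>>>>> def is_child_with_package_parents(txs):
>>>>>>>>     # Shape check: the last transaction (the child) must directly
>>>>>>>>     # spend every other package member.
>>>>>>>>     child, parents = txs[-1], txs[:-1]
>>>>>>>>     child_input_txids = {txid for txid, _ in child.inputs}
>>>>>>>>     return all(p.txid in child_input_txids for p in parents)
>>>>>>>>
>>>>>>>> def package_as_one_tx(txs):
>>>>>>>>     # Inputs are outpoints not created inside the package; outputs
>>>>>>>>     # are outputs not spent inside the package.
>>>>>>>>     package_txids = {tx.txid for tx in txs}
>>>>>>>>     spent_inside = {op for tx in txs for op in tx.inputs}
>>>>>>>>     inputs = [op for tx in txs for op in tx.inputs
>>>>>>>>               if op[0] not in package_txids]
>>>>>>>>     outputs = [(tx.txid, i) for tx in txs for i in range(tx.n_outputs)
>>>>>>>>                if (tx.txid, i) not in spent_inside]
>>>>>>>>     return inputs, outputs
>>>>>>>> ```
>>>>>>>>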
>>>>>>>> #### Fee-Related Checks Use Package Feerate
>>>>>>>>
>>>>>>>> Package Feerate = the total modified fees divided by the total
>>>>>>>> virtual size of
>>>>>>>> all transactions in the package.
>>>>>>>>
>>>>>>>> To meet the two feerate requirements of a mempool, i.e., the
>>>>>>>> pre-configured
>>>>>>>> minimum relay feerate (`minRelayTxFee`) and dynamic mempool minimum
>>>>>>>> feerate, the
>>>>>>>> total package feerate is used instead of the individual feerate.
>>>>>>>> The individual
>>>>>>>> transactions are allowed to be below feerate requirements if the
>>>>>>>> package meets
>>>>>>>> the feerate requirements. For example, the parent(s) in the package
>>>>>>>> can have 0
>>>>>>>> fees but be paid for by the child.
>>>>>>>>
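>>>>>>>> A minimal sketch of this check (illustrative Python; the real code
>>>>>>>> works with modified fees and sat/kvB):
>>>>>>>>
>>>>>>>> ```python
>>>>>>>> def meets_feerate_floors(txs, min_relay_feerate, mempool_min_feerate):
>>>>>>>>     # Apply both feerate floors (sat/vB) to the package as a whole
>>>>>>>>     # rather than to each transaction individually.
>>>>>>>>     feerate = sum(tx.fee for tx in txs) / sum(tx.vsize for tx in txs)
>>>>>>>>     return feerate >= max(min_relay_feerate, mempool_min_feerate)
>>>>>>>>
>>>>>>>> # e.g. a 0-fee 100vB parent plus a 100vB child paying 400sat gives a
>>>>>>>> # package feerate of 2sat/vB, so a 1sat/vB floor is met even though
>>>>>>>> # the parent alone would fail it.
>>>>>>>> ```
>>>>>>>>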
>>>>>>>> *Rationale*: This can be thought of as "CPFP within a package,"
>>>>>>>> solving the
>>>>>>>> issue of a parent not meeting minimum fees on its own. This allows
>>>>>>>> L2
>>>>>>>> applications to adjust their fees at broadcast time instead of
>>>>>>>> overshooting or
>>>>>>>> risking getting stuck/pinned.
>>>>>>>>
>>>>>>>> We use the package feerate of the package *after deduplication*.
>>>>>>>>
>>>>>>>> *Rationale*:  It would be incorrect to use the fees of transactions
>>>>>>>> that are
>>>>>>>> already in the mempool, as we do not want a transaction's fees to be
>>>>>>>> double-counted for both its individual RBF and package RBF.
>>>>>>>>
>>>>>>>> Examples F and G [14] show the same package, but P1 is submitted
>>>>>>>> individually before
>>>>>>>> the package in example G. In example F, we can see that the 300vB
>>>>>>>> package pays
>>>>>>>> an additional 200sat in fees, which is not enough to pay for its
>>>>>>>> own bandwidth
>>>>>>>> (BIP125#4). In example G, we can see that P1 pays enough to replace
>>>>>>>> M1, but
>>>>>>>> using P1's fees again during package submission would make it look
>>>>>>>> like a 300sat
>>>>>>>> increase for a 200vB package. Even including its fees and size
>>>>>>>> would not be
>>>>>>>> sufficient in this example, since the 300sat looks like enough for
>>>>>>>> the 300vB
>>>>>>>> package. The calculation after deduplication is a 100sat increase
>>>>>>>> for a package of size 200vB, which correctly fails BIP125#4. (Assume
>>>>>>>> all transactions have a size of 100vB.)
>>>>>>>>
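>>>>>>>> Plugging the numbers from F and G into a small helper makes this
>>>>>>>> concrete (illustrative only; 1sat/vB is assumed for the incremental
>>>>>>>> relay feerate):
>>>>>>>>
>>>>>>>> ```python
>>>>>>>> def pays_for_own_bandwidth(fee_delta_sat, package_vsize, incr=1.0):
>>>>>>>>     # BIP125#4-style check on the deduplicated package: the
>>>>>>>>     # *additional* fees must cover the package's own bandwidth.
>>>>>>>>     return fee_delta_sat >= incr * package_vsize
>>>>>>>>
>>>>>>>> # Example F: the 300vB package adds only 200sat -> fails.
>>>>>>>> assert not pays_for_own_bandwidth(200, 300)
>>>>>>>> # Example G, done wrong: re-counting P1's fee looks like 300sat for
>>>>>>>> # 200vB (or 300vB if its size is also re-counted) and would pass.
>>>>>>>> assert pays_for_own_bandwidth(300, 200)
>>>>>>>> assert pays_for_own_bandwidth(300, 300)
>>>>>>>> # Example G, after deduplication: 100sat for 200vB -> correctly fails.
>>>>>>>> assert not pays_for_own_bandwidth(100, 200)
>>>>>>>> ```
>>>>>>>>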
>>>>>>>> #### Package RBF
>>>>>>>>
>>>>>>>> If a package meets feerate requirements as a package, the parents in
>>>>>>>> the package are allowed to replace-by-fee mempool transactions. The
>>>>>>>> child cannot
>>>>>>>> replace mempool transactions. Multiple transactions can replace the
>>>>>>>> same
>>>>>>>> transaction, but in order to be valid, none of the transactions can
>>>>>>>> try to
>>>>>>>> replace an ancestor of another transaction in the same package
>>>>>>>> (which would thus
>>>>>>>> make its inputs unavailable).
>>>>>>>>
>>>>>>>> *Rationale*: Even if we are using package feerate, a package will
>>>>>>>> not propagate
>>>>>>>> as intended if RBF still requires each individual transaction to
>>>>>>>> meet the
>>>>>>>> feerate requirements.
>>>>>>>>
>>>>>>>> We use a set of rules slightly modified from BIP125 as follows:
>>>>>>>>
>>>>>>>> ##### Signaling (Rule #1)
>>>>>>>>
>>>>>>>> All mempool transactions to be replaced must signal replaceability.
>>>>>>>>
>>>>>>>> *Rationale*: Signaling logic should be the same for package RBF and
>>>>>>>> single-transaction acceptance. This would be updated if
>>>>>>>> single-transaction validation moves to full RBF.
>>>>>>>>
>>>>>>>> ##### New Unconfirmed Inputs (Rule #2)
>>>>>>>>
>>>>>>>> A package may include new unconfirmed inputs, but the ancestor
>>>>>>>> feerate of the
>>>>>>>> child must be at least as high as the ancestor feerates of every
>>>>>>>> transaction
>>>>>>>> being replaced. This is contrary to BIP125#2, which states "The
>>>>>>>> replacement
>>>>>>>> transaction may only include an unconfirmed input if that input was
>>>>>>>> included in
>>>>>>>> one of the original transactions. (An unconfirmed input spends an
>>>>>>>> output from a
>>>>>>>> currently-unconfirmed transaction.)"
>>>>>>>>
>>>>>>>> *Rationale*: The purpose of BIP125#2 is to ensure that the
>>>>>>>> replacement
>>>>>>>> transaction has a higher ancestor score than the original
>>>>>>>> transaction(s) (see
>>>>>>>> [comment][13]). Example H [16] shows how adding a new unconfirmed
>>>>>>>> input can lower the
>>>>>>>> ancestor score of the replacement transaction. P1 is trying to
>>>>>>>> replace M1, and
>>>>>>>> spends an unconfirmed output of M2. P1 pays 800sat, M1 pays 600sat,
>>>>>>>> and M2 pays
>>>>>>>> 100sat. Assume all transactions have a size of 100vB. While, in
>>>>>>>> isolation, P1
>>>>>>>> looks like a better mining candidate than M1, it must be mined with
>>>>>>>> M2, so its
>>>>>>>> ancestor feerate is actually 4.5sat/vB.  This is lower than M1's
>>>>>>>> ancestor
>>>>>>>> feerate, which is 6sat/vB.
>>>>>>>>
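>>>>>>>> The arithmetic of example H as a small illustrative helper (the
>>>>>>>> ancestor set is assumed to be precomputed and deduplicated):
>>>>>>>>
>>>>>>>> ```python
>>>>>>>> from collections import namedtuple
>>>>>>>>
>>>>>>>> Tx = namedtuple("Tx", "fee vsize")  # sat, vB
>>>>>>>>
>>>>>>>> def ancestor_feerate(tx, unconfirmed_ancestors):
>>>>>>>>     # Feerate of tx together with all of its unconfirmed ancestors.
>>>>>>>>     fee = tx.fee + sum(a.fee for a in unconfirmed_ancestors)
>>>>>>>>     vsize = tx.vsize + sum(a.vsize for a in unconfirmed_ancestors)
>>>>>>>>     return fee / vsize
>>>>>>>>
>>>>>>>> # P1 (800sat) spends an unconfirmed output of M2 (100sat) and wants
>>>>>>>> # to replace M1 (600sat); every transaction is 100vB.
>>>>>>>> P1, M1, M2 = Tx(800, 100), Tx(600, 100), Tx(100, 100)
>>>>>>>> assert ancestor_feerate(P1, [M2]) == 4.5  # worse than...
>>>>>>>> assert ancestor_feerate(M1, []) == 6.0    # ...M1, so it should fail
>>>>>>>> ```
>>>>>>>>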
>>>>>>>> In package RBF, the rule analogous to BIP125#2 would be "none of the
>>>>>>>> transactions in the package can spend new unconfirmed inputs."
>>>>>>>> Example J [17] shows
>>>>>>>> why, if any of the package transactions have ancestors, package
>>>>>>>> feerate is no
>>>>>>>> longer accurate. Even though M2 and M3 are not ancestors of P1
>>>>>>>> (which is the
>>>>>>>> replacement transaction in an RBF), we're actually interested in
>>>>>>>> the entire
>>>>>>>> package. A miner should mine M1, which is 5sat/vB, instead of M2, M3,
>>>>>>>> P1, P2, and P3, which together are only 4sat/vB. The Package RBF rule
>>>>>>>> cannot be loosened
>>>>>>>> to only allow
>>>>>>>> the child to have new unconfirmed inputs, either, because it can
>>>>>>>> still cause us
>>>>>>>> to overestimate the package's ancestor score.
>>>>>>>>
>>>>>>>> However, enforcing a rule analogous to BIP125#2 would not only make
>>>>>>>> Package RBF
>>>>>>>> less useful, but would also break Package RBF for packages with
>>>>>>>> parents already
>>>>>>>> in the mempool: if a package parent has already been submitted, it
>>>>>>>> would look
>>>>>>>> like the child is spending a "new" unconfirmed input. In example K
>>>>>>>> [18], we're
>>>>>>>> looking to replace M1 with the entire package including P1, P2, and
>>>>>>>> P3. We must
>>>>>>>> consider the case where one of the parents is already in the
>>>>>>>> mempool (in this
>>>>>>>> case, P2), which means we must allow P3 to have new unconfirmed
>>>>>>>> inputs. However,
>>>>>>>> M2 lowers the ancestor score of P3 to 4.3sat/vB, so we should not
>>>>>>>> replace M1
>>>>>>>> with this package.
>>>>>>>>
>>>>>>>> Thus, the package RBF rule regarding new unconfirmed inputs is less
>>>>>>>> strict than
>>>>>>>> BIP125#2. However, we still achieve the same goal of requiring the
>>>>>>>> replacement
>>>>>>>> transactions to have a ancestor score at least as high as the
>>>>>>>> original ones. As
>>>>>>>> a result, the entire package is required to be a higher feerate
>>>>>>>> mining candidate
>>>>>>>> than each of the replaced transactions.
>>>>>>>>
>>>>>>>> Another note: the [comment][13] above the BIP125#2 code in the
>>>>>>>> original RBF
>>>>>>>> implementation suggests that the rule was intended to be temporary.
>>>>>>>>
>>>>>>>> ##### Absolute Fee (Rule #3)
>>>>>>>>
>>>>>>>> The package must increase the absolute fee of the mempool, i.e. the
>>>>>>>> total fees
>>>>>>>> of the package must be higher than the absolute fees of the mempool
>>>>>>>> transactions
>>>>>>>> it replaces. Combined with the CPFP rule above, this differs from
>>>>>>>> BIP125 Rule #3
>>>>>>>> - an individual transaction in the package may have lower fees than
>>>>>>>> the
>>>>>>>>   transaction(s) it is replacing. In fact, it may have 0 fees, and
>>>>>>>> the child
>>>>>>>> pays for RBF.
>>>>>>>>
>>>>>>>> ##### Feerate (Rule #4)
>>>>>>>>
>>>>>>>> The package must pay for its own bandwidth; the package feerate must
>>>>>>>> exceed the feerate of the replaced transactions by at least the
>>>>>>>> incremental relay feerate (`incrementalRelayFee`). Combined with the
>>>>>>>> CPFP rule above, this
>>>>>>>> differs from
>>>>>>>> BIP125 Rule #4 - an individual transaction in the package can have
>>>>>>>> a lower
>>>>>>>> feerate than the transaction(s) it is replacing. In fact, it may
>>>>>>>> have 0 fees,
>>>>>>>> and the child pays for RBF.
>>>>>>>>
>>>>>>>> ##### Total Number of Replaced Transactions (Rule #5)
>>>>>>>>
>>>>>>>> The package cannot replace more than 100 mempool transactions. This
>>>>>>>> is identical
>>>>>>>> to BIP125 Rule #5.
>>>>>>>>
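>>>>>>>> Putting rules 1-5 together, a rough sketch (Python pseudocode, not
>>>>>>>> the C++ in [#22290][1]; `package` is the deduplicated package,
>>>>>>>> `replaced` the mempool transactions to be evicted, and the ancestor
>>>>>>>> feerates are assumed to be precomputed in sat/vB):
>>>>>>>>
>>>>>>>> ```python
>>>>>>>> INCREMENTAL_RELAY_FEERATE = 1.0   # sat/vB, assumed default
>>>>>>>> MAX_REPLACEMENTS = 100            # Rule #5, as in BIP125
>>>>>>>>
>>>>>>>> def package_rbf_checks(package, replaced,
>>>>>>>>                        child_anc_feerate, replaced_anc_feerates):
>>>>>>>>     # Rule 1: everything to be replaced must signal replaceability.
>>>>>>>>     if not all(tx.signals_rbf for tx in replaced):
>>>>>>>>         return "replacement-not-signaled"
>>>>>>>>     # Rule 2 (modified): the child's ancestor feerate must be at
>>>>>>>>     # least as high as that of every transaction being replaced.
>>>>>>>>     if any(child_anc_feerate < r for r in replaced_anc_feerates):
>>>>>>>>         return "ancestor-feerate-too-low"
>>>>>>>>     package_fee = sum(tx.fee for tx in package)
>>>>>>>>     package_vsize = sum(tx.vsize for tx in package)
>>>>>>>>     replaced_fee = sum(tx.fee for tx in replaced)
>>>>>>>>     # Rule 3: absolute fees in the mempool must increase.
>>>>>>>>     if package_fee <= replaced_fee:
>>>>>>>>         return "insufficient-absolute-fee"
>>>>>>>>     # Rule 4: the additional fees must pay for the package's own
>>>>>>>>     # bandwidth at the incremental relay feerate.
>>>>>>>>     if (package_fee - replaced_fee
>>>>>>>>             < INCREMENTAL_RELAY_FEERATE * package_vsize):
>>>>>>>>         return "insufficient-feerate-delta"
>>>>>>>>     # Rule 5: bounded number of evicted transactions.
>>>>>>>>     if len(replaced) > MAX_REPLACEMENTS:
>>>>>>>>         return "too-many-replacements"
>>>>>>>>     return None  # replacement acceptable under this sketch
>>>>>>>> ```
>>>>>>>>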
>>>>>>>> ### Expected FAQs
>>>>>>>>
>>>>>>>> 1. Is it possible for only some of the package to make it into the
>>>>>>>> mempool?
>>>>>>>>
>>>>>>>>    Yes, it is. However, since we evict transactions from the
>>>>>>>> mempool by
>>>>>>>> descendant score and the package child is supposed to be sponsoring
>>>>>>>> the fees of
>>>>>>>> its parents, the most common scenario would be all-or-nothing. This
>>>>>>>> is
>>>>>>>> incentive-compatible. In fact, to be conservative, package validation
>>>>>>>> should begin by trying to submit all of the transactions individually,
>>>>>>>> and only use the package mempool acceptance logic if the parents fail
>>>>>>>> due to low feerate (a sketch of this strategy follows these FAQs).
>>>>>>>>
>>>>>>>> 2. Should we allow packages to contain already-confirmed
>>>>>>>> transactions?
>>>>>>>>
>>>>>>>>     No, for practical reasons. In mempool validation, we actually
>>>>>>>> aren't able to
>>>>>>>> tell with 100% confidence if we are looking at a transaction that
>>>>>>>> has already
>>>>>>>> confirmed, because we look up inputs using a UTXO set. If we have
>>>>>>>> historical
>>>>>>>> block data, it's possible to look for it, but this is inefficient,
>>>>>>>> not always
>>>>>>>> possible for pruning nodes, and unnecessary because we're not going
>>>>>>>> to do
>>>>>>>> anything with the transaction anyway. As such, we already have the
>>>>>>>> expectation
>>>>>>>> that transaction relay is somewhat "stateful" i.e. nobody should be
>>>>>>>> relaying
>>>>>>>> transactions that have already been confirmed. Similarly, we
>>>>>>>> shouldn't be
>>>>>>>> relaying packages that contain already-confirmed transactions.
>>>>>>>>
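>>>>>>>> An illustrative sketch of the "individual first, package as a
>>>>>>>> fallback" strategy mentioned in FAQ 1 (all interfaces here are
>>>>>>>> hypothetical stand-ins, not existing RPCs):
>>>>>>>>
>>>>>>>> ```python
>>>>>>>> def submit_with_package_fallback(txs, mempool):
>>>>>>>>     # Try each transaction on its own first.
>>>>>>>>     results = {tx.txid: mempool.accept_single(tx) for tx in txs}
>>>>>>>>     if any(r.failed_for_low_feerate for r in results.values()):
>>>>>>>>         # Re-validate whatever is still missing as a package so the
>>>>>>>>         # child's fees can be shared with its low-feerate parents.
>>>>>>>>         remaining = [tx for tx in txs if not results[tx.txid].accepted]
>>>>>>>>         results.update(mempool.accept_package(remaining))
>>>>>>>>     return results
>>>>>>>> ```
>>>>>>>>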
>>>>>>>> [1]: https://github.com/bitcoin/bitcoin/pull/22290
>>>>>>>> [2]:
>>>>>>>> https://github.com/bitcoin/bips/blob/1f0b563738199ca60d32b4ba779797fc97d040fe/bip-0141.mediawiki#transaction-size-calculations
>>>>>>>> [3]:
>>>>>>>> https://github.com/bitcoin/bitcoin/blob/94f83534e4b771944af7d9ed0f40746f392eb75e/src/policy/policy.cpp#L282
>>>>>>>> [4]: https://github.com/bitcoin/bitcoin/pull/16400
>>>>>>>> [5]: https://github.com/bitcoin/bitcoin/pull/21062
>>>>>>>> [6]: https://github.com/bitcoin/bitcoin/pull/22675
>>>>>>>> [7]: https://github.com/bitcoin/bitcoin/pull/22796
>>>>>>>> [8]: https://github.com/bitcoin/bitcoin/pull/20833
>>>>>>>> [9]: https://github.com/bitcoin/bitcoin/pull/21800
>>>>>>>> [10]: https://github.com/bitcoin/bitcoin/pull/16401
>>>>>>>> [11]: https://github.com/bitcoin/bitcoin/pull/19621
>>>>>>>> [12]:
>>>>>>>> https://github.com/bitcoin/bips/blob/master/bip-0125.mediawiki
>>>>>>>> [13]:
>>>>>>>> https://github.com/bitcoin/bitcoin/pull/6871/files#diff-34d21af3c614ea3cee120df276c9c4ae95053830d7f1d3deaf009a4625409ad2R1101-R1104
>>>>>>>> [14]:
>>>>>>>> https://user-images.githubusercontent.com/25183001/133567078-075a971c-0619-4339-9168-b41fd2b90c28.png
>>>>>>>> [15]:
>>>>>>>> https://user-images.githubusercontent.com/25183001/132856734-fc17da75-f875-44bb-b954-cb7a1725cc0d.png
>>>>>>>> [16]:
>>>>>>>> https://user-images.githubusercontent.com/25183001/133567347-a3e2e4a8-ae9c-49f8-abb9-81e8e0aba224.png
>>>>>>>> [17]:
>>>>>>>> https://user-images.githubusercontent.com/25183001/133567370-21566d0e-36c8-4831-b1a8-706634540af3.png
>>>>>>>> [18]:
>>>>>>>> https://user-images.githubusercontent.com/25183001/133567444-bfff1142-439f-4547-800a-2ba2b0242bcb.png
>>>>>>>> [19]:
>>>>>>>> https://user-images.githubusercontent.com/25183001/133456219-0bb447cb-dcb4-4a31-b9c1-7d86205b68bc.png
>>>>>>>> [20]:
>>>>>>>> https://user-images.githubusercontent.com/25183001/132857787-7b7c6f56-af96-44c8-8d78-983719888c19.png
>>>>>>>> _______________________________________________
>>>>>>>> bitcoin-dev mailing list
>>>>>>>> bitcoin-dev@lists•linuxfoundation.org
>>>>>>>> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>>>>>>>>

[-- Attachment #2: Type: text/html, Size: 78450 bytes --]

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [bitcoin-dev] Proposal: Package Mempool Accept and Package RBF
  2021-09-23 15:36       ` Gloria Zhao
  2021-09-26 21:10         ` Antoine Riard
@ 2021-10-14 10:48         ` darosior
  1 sibling, 0 replies; 16+ messages in thread
From: darosior @ 2021-10-14 10:48 UTC (permalink / raw)
  To: Gloria Zhao, Bitcoin Protocol Discussion

[-- Attachment #1: Type: text/plain, Size: 1609 bytes --]

Hi Gloria,

> In summary, it seems that the decisions that might still need attention/input from devs on this mailing list are:
> 1. Whether we should start with multiple-parent-1-child or 1-parent-1-child.
> 2. Whether it's ok to require that the child not have conflicts with mempool transactions.

I would like to point out that package relay is not only useful in Lightning's adversarial scenarios but also for a better CPFP user experience.
Take for instance a wallet managing coins it can only spend using pre-signed transactions. It may want to batch the spending of these coins into a single transaction, but it can only do so after broadcasting the pre-signed tx for each of these coins.
So for 3 UTXOs it'd be:
coin1 -----> pres. tx1 ----- |
coin2 -----> pres. tx2 ----- | - - - spending transaction
coin3 -----> pres. tx3 ----- |

Now all these transactions are pre-signed at a fixed feerate, which might be below the mempool minimum fee at the time of broadcast.
This is a use case for multiple-parents-1-child packages. This is also something we do for Revault: you have pre-signed Unvault transactions, each with a CPFP output [0]. Since their confirmation is not security-critical, you'd really want to batch the fee-paying child tx.

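For illustration, here is a rough sketch of how such a wallet could assemble the batch (Python; rpc stands for any JSON-RPC client connected to a node, and testmempoolaccept with several transactions is only the package *test* accept from PR #20833 referenced in Gloria's post, so nothing is broadcast):

    def test_batched_cpfp(rpc, presigned_hexes, spending_hex):
        # Parents first, child last: packages must be topologically sorted,
        # and the child spends an output of each pre-signed tx.
        package = list(presigned_hexes) + [spending_hex]
        return rpc.testmempoolaccept(package)

Actual submission and relay would then go through the package acceptance proposed in this thread, once available.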
Regarding 2, I did not come up with a reason to drop this rule (yet?): if you need to replace the child you can use individual submission, and if you need to replace a parent, the child itself no longer conflicts.

Thanks for the effort put into requesting feedback,
Antoine

[0] https://github.com/revault/practical-revault/blob/master/transactions.md#unvault_tx

[-- Attachment #2: Type: text/html, Size: 2155 bytes --]

^ permalink raw reply	[flat|nested] 16+ messages in thread

end of thread, other threads:[~2021-10-14 10:49 UTC | newest]

Thread overview: 16+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-09-16  7:51 [bitcoin-dev] Proposal: Package Mempool Accept and Package RBF Gloria Zhao
2021-09-19 23:16 ` Antoine Riard
2021-09-20 15:10   ` Gloria Zhao
2021-09-23  4:29     ` Antoine Riard
2021-09-23 15:36       ` Gloria Zhao
2021-09-26 21:10         ` Antoine Riard
2021-09-27  7:15           ` Bastien TEINTURIER
2021-09-28 22:59             ` Antoine Riard
2021-09-29 11:56               ` Gloria Zhao
2021-10-14 10:48         ` darosior
2021-09-20  9:19 ` Bastien TEINTURIER
2021-09-21 11:18   ` Gloria Zhao
2021-09-21 15:18     ` Bastien TEINTURIER
2021-09-21 16:42       ` Gloria Zhao
2021-09-22  7:10         ` Bastien TEINTURIER
2021-09-22 13:26           ` Gloria Zhao

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox