Hi Peter,

Answering your latest two emails.

> I do not consider CVE-2017-12842 to be serious. Indeed, I'm skeptical that we
> should even fix it with a fork. SPV validation is very sketchy, and the amount
> of work and money required to trigger CVE-2017-12842 is probably as or more
> expensive than simply creating fake blocks.

> Sergio's RSK Bridge contract being vulnerable to it just indicates it was a
> reckless design.

I don't think we should disregard SPV validation yet, in a world where we have
not really solved the scaling of Bitcoin payments for a large range of user segments
running on low-cost Android mobiles with limited validation resources. On the cost
of the attack, yes, I think it's probably on the order of creating fake blocks at the
current difficulty.
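For readers wondering why SPV proofs are brittle here: CVE-2017-12842 exploits the fact that Bitcoin's merkle tree does not domain-separate leaves from inner nodes, so a 64-byte transaction can double as an inner node. A minimal sketch (my own toy merkle code; the 64-byte blob below is illustrative only, since in the real attack it must also deserialize as a valid transaction, which is where the brute-force cost comes from):

```python
import hashlib

def sha256d(b: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def merkle_root(hashes):
    # Simplified: assumes a power-of-two count (real Bitcoin duplicates the last hash).
    while len(hashes) > 1:
        hashes = [sha256d(hashes[i] + hashes[i + 1]) for i in range(0, len(hashes), 2)]
    return hashes[0]

def verify_inclusion(leaf_hash, proof, root):
    # proof: list of (sibling_hash, leaf_is_left) pairs, walked leaf to root.
    h = leaf_hash
    for sibling, h_is_left in proof:
        h = sha256d(h + sibling) if h_is_left else sha256d(sibling + h)
    return h == root

# Hypothetical fake payment the attacker wants an SPV client to accept.
fake_tx = b"attacker-chosen fake payment"
# 64-byte blob: txid(fake_tx) || 32 attacker-controlled bytes. In the real
# attack this blob must ALSO parse as a valid Bitcoin transaction.
T = sha256d(fake_tx) + b"\x42" * 32

txids = [sha256d(b"coinbase"), sha256d(T), sha256d(b"tx2"), sha256d(b"tx3")]
root = merkle_root(list(txids))

# Honest proof that T (index 1) is in the block.
proof_T = [(txids[0], False), (sha256d(txids[2] + txids[3]), True)]
assert verify_inclusion(sha256d(T), proof_T, root)

# Forged proof: present T's two halves as one extra merkle level. The verifier
# computes sha256d(T[:32] + T[32:]) == sha256d(T) and then walks the honest path,
# so the fake transaction "verifies" against the real block header.
forged_proof = [(T[32:], True)] + proof_T
assert verify_inclusion(sha256d(fake_tx), forged_proof, root)
```

The verifier has no way to tell whether a 64-byte preimage was a transaction or a pair of child hashes, which is the whole ambiguity.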

On judging whether a design is reckless or not, in my experience it's always good
to do so with a full-fledged, cost-based threat model, in a comparative analysis
w.r.t. alternative designs.

> To be clear, in this particular case I had specific, insider, knowledge that
> the relevant people had in fact seen my report and had already decided to
> dismiss it. This isn't a typical case where you're emailing some random company
> and don't have any contacts. I personally knew prior to publication that the
> relevant people had been given a fair chance to comment, had chosen not to, and
> I would likely receive no response at all.

Sure thing, it's not a disclosure configuration where the reporter has no redundant,
out-of-band communication channels to the software's group of maintainers.
I can only suggest that Bitcoin Core's `SECURITY.md` be amended to commit to
acknowledging receipt of finding reports backed by technical proofs within
~72 hours. I'll let external observers from the community form their own judgment
on what this disclosure episode says about the state of Bitcoin security problem handling.

> I'm not going to say anything further on how I knew this, because I'm not about
> to put up people who have been co-operating with me to the risk of harassment
> from people like Harding and others; I'm not very popular right now with many
> of the Bitcoin Core people working on the mempool code.

I think it's up to the maintainers or vendors of any piece of software to justify why
they're disregarding sound technical reports coming from a security researcher with
a credible and proven track record, especially when it's apparently for hidden social
reasons.

There is also the option to disclose under a pseudonym, which I have personally done
a few times in the past. I can understand one may not wish to do so, for professional
reputation reasons.

> Anyway, I think the lesson learned here is it's probably not worth bothering
> with a disclosure process at all for this type of issue. It just created a
> bunch of distracting political drama when simply publishing this exploit
> variation immediately probably would not have.

I've checked my own archive on the Lightning side and, from memory, I can
recall two far more serious issues than free-relay attacks which were quickly
disclosed without a formal process over the past years:
- time-dilation attacks, by myself [0]
- RBF-pinning on second-stage HTLCs, by TheBlueMatt [1]

Both were conducted with less than a 7-day timeframe between the private report
to selected developer parties and public full disclosure. With more experience in
handling security issues since the initial time-dilation report in 2019, I still think
it's good to give 2 weeks to any vendor who wishes to engage in a mitigation
process (barring special or emergency considerations).

In matters of ethical infosec and responsible disclosure, the process and timeframe
actually followed should matter far more than the "who" of the reporter, and his or
her "popularity" score on the social graph should be completely disregarded - imho.

> If, for example, all Bitcoin nodes were somehow peered in a perfect ring, with
> each node having exactly two peers, the sum total bandwidth of using 2
> conflicting proof-of-UTXOs (aka spending the UTXO), would be almost identical
> to the sum total bandwidth of just using 1. The only additional bandwidth would
> be the three to four nodes at the "edges" of the ring who saw the two different
> conflicting versions.

> With higher #'s of peers, the total maximum extra bandwidth used broadcasting
> conflicts increases proportionally.

Yes, the higher the number of peers, the more the total maximum extra outgoing
bandwidth used for broadcasting conflicts increases, proportionally. I think if you
dissociate transaction-announcement bandwidth (e.g. INV(wtxid)) from
transaction-fetching bandwidth (e.g. GETDATA(wtxid)), you can refine the
adversarial scenario to reach the highest DoS impact for each unique
proof-of-UTXO, i.e. distinguish the bandwidth cost carried by the announcer
from the bandwidth cost encumbered by the receiver.
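To make the fan-out point concrete, here is a toy count (my own back-of-envelope model, not Bitcoin Core's relay logic) of the cross-partition edges where the "extra" conflict bandwidth lands, on circulant graphs of increasing degree. Degree 2 reproduces your perfect ring, with its handful of boundary nodes seeing both conflicting versions:

```python
def circulant_edges(n: int, k: int):
    # n nodes on a ring; each node peers with its k nearest neighbours on each
    # side, giving a 2k-regular graph. k = 1 is the perfect-ring example.
    return {(i, (i + d) % n) for i in range(n) for d in range(1, k + 1)}

def crossing_edges(n: int, k: int) -> int:
    # Split the network into two contiguous halves, one conflicting
    # proof-of-UTXO spend broadcast per half; only edges straddling the cut
    # carry announcements of the "other" version.
    half = n // 2
    return sum(1 for a, b in circulant_edges(n, k) if (a < half) != (b < half))

for degree in (2, 4, 8, 16):
    print(degree, crossing_edges(10_000, degree // 2))
```

The degree-2 ring has just 2 boundary edges (roughly the three to four nodes at the "edges" in your example), and the boundary, hence the wasted conflict bandwidth, grows with peer degree; with random rather than contiguous partitions the growth would look closer to linear in the number of peers.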

Best,
Antoine


On Thursday, March 28, 2024 at 20:19:19 UTC, Peter Todd wrote:
On Thu, Mar 28, 2024 at 07:13:38PM +0000, Antoine Riard wrote:
> > Modulo economic irrationalities with differently sized txs like the Rule #6
> > attack, the proof-of-UTXO is almost economically paid even when mempools are
> > partitioned because the bandwidth used by a given form of a transaction is
> > limited to the % of peers that relay it. Eg if I broadcast TxA to 50% of nodes,
> > and TxB to the other 50%, both spending the same txout, the total cost/KB used
> > in total would exactly the same... except that nodes have more than one peer.
> > This acts as an amplification fator to attacks depending on the exact topology
> > as bandwidth is wasted in proportion to the # of peers txs need to be broadcast
> > too. Basically, a fan-out factor.
>
> > If the # of peers is reduced, the impact of this type of attack is also
> > reduced. Of course, a balance has to be maintained.
>
> Sure, proof-of-UTXO is imperfectly economically charged, however I think
> you can
> re-use the same proof-of-UTXO for each subset of Sybilled transaction-relay
> peers.

Of course you can. That's the whole point of my scenario above: you can re-use
the proof-of-UTXO. But since nodes' mempools enforce anti-doublespending, the
tradeoff is less total nodes seeing each individual conflicting uses.

If, for example, all Bitcoin nodes were somehow peered in a perfect ring, with
each node having exactly two peers, the sum total bandwidth of using 2
conflicting proof-of-UTXOs (aka spending the UTXO), would be almost identical
to the sum total bandwidth of just using 1. The only additional bandwidth would
be the three to four nodes at the "edges" of the ring who saw the two different
conflicting versions.

With higher #'s of peers, the total maximum extra bandwidth used broadcasting
conflicts increases proportionally.

--
https://petertodd.org 'peter'[:-1]@petertodd.org

--
You received this message because you are subscribed to the Google Groups "Bitcoin Development Mailing List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to bitcoindev+unsubscribe@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/bitcoindev/f1868012-8ad2-44ba-b83c-b53d5892b8e6n%40googlegroups.com.