public inbox for bitcoindev@googlegroups.com
 help / color / mirror / Atom feed
From: Rusty Russell <rusty@rustcorp•com.au>
To: Gregory Maxwell <greg@xiph•org>,
	Bitcoin Protocol Discussion
	<bitcoin-dev@lists•linuxfoundation.org>
Subject: Re: [bitcoin-dev] Compact Block Relay BIP
Date: Wed, 11 May 2016 06:53:55 +0930	[thread overview]
Message-ID: <87k2j1zjx0.fsf@rustcorp.com.au> (raw)
In-Reply-To: <CAAS2fgRePSQ=-3MTR3p3U1zbd1ucfNg0_ocAegJCi4qR=XpypA@mail.gmail.com>

Gregory Maxwell <greg@xiph•org> writes:
> On Tue, May 10, 2016 at 5:28 AM, Rusty Russell via bitcoin-dev
> <bitcoin-dev@lists•linuxfoundation.org> wrote:
>> I used variable-length bit encodings, and used the shortest encoding
>> which is unique to you (including mempool).  It's a little more work,
>> but for an average node transmitting a block with 1300 txs and another
>> ~3000 in the mempool, you expect about 12 bits per transaction.  IOW,
>> about 1/5 of your current size.  Critically, we might be able to fit in
>> two or three TCP packets.
>
> Hm. 12 bits sounds very small even giving those figures. Why failure
> rate were you targeting?

That's a good question; I was assuming a best-case in which we have
mempool set reconciliation (handwave) thus know they are close.  But
there's also an alterior motive: any later more sophisticated approach
will want variable-length IDs, and I'd like Matt to do the work :)

In particular, you can significantly narrow the possibilities for a
block by sending the min-fee-per-kb and a list of "txs in my mempool
which didn't get in" and "txs which did despite not making the
fee-per-kb".  Those turn out to be tiny, and often make set
reconciliation trivial.  That's best done with variable-length IDs.

> (*Not interesting because it mostly reduces exposure to loss and the
> gods of TCP, but since those are the long poles in the latency tent,
> it's best to escape them entirely, see Matt's udp_wip branch.)

I'm not convinced on UDP; it always looks impressive, but then ends up
reimplementing TCP in practice.  We should be well within a TCP window
for these, so it's hard to see where we'd win.

>> I would also avoid the nonce to save recalculating for each node, and
>> instead define an id as:
>
> Doing this would greatly increase the cost of a collision though, as
> it would happen in many places in the network at once over the on the
> network at once, rather than just happening on a single link, thus
> hardly impacting overall propagation.

"Greatly increase"?  I don't see that.

Let's assume an attacker grinds out 10,000 txs with 128 bits of the same
TXID, and gets them all in a block.  They then win the lottery and get a
collision.  Now we have to transmit ~48 bytes more than expected.

> Using the same nonce means you also would not get a recovery gain from
> jointly decoding using compact blocks sent from multiple peers (which
> you'll have anyways in high bandwidth mode).

Not quite true, since if their mempools differ they'll use different
encoding lengths, but yes, you'll get less of this.

> With a nonce a sender does have the option of reusing what they got--
> but the actual encoding cost is negligible, for a 2500 transaction
> block its 27 microseconds (once per block, shared across all peers)
> using Pieter's suggestion of siphash 1-3 instead of the cheaper
> construct in the current draft.
>
> Of course, if you're going to check your whole mempool to reroll the
> nonce, thats another matter-- but that seems wasteful compared to just
> using a table driven size with a known negligible failure rate.

I'm not worried about the sender: The recipient needs to encode all the
mempool.

>> As Peter R points out, we could later enhance receiver to brute force
>> collisions (you could speed that by sending a XOR of all the txids, but
>> really if there are more than a few collisions, give up).
>
> The band between "no collisions" and "infeasible many" is fairly
> narrow.  You can add a small amount more space to the ids and
> immediately be in the no collision zone.

Indeed, I would be adding extra bits in the sender and not implementing
brute force in the receiver.  But I welcome someone else to do so.

Cheers,
Rusty.


  reply	other threads:[~2016-05-11  0:54 UTC|newest]

Thread overview: 23+ messages / expand[flat|nested]  mbox.gz  Atom feed  top
2016-05-02 22:13 Matt Corallo
2016-05-03  5:02 ` Gregory Maxwell
2016-05-06  3:09   ` Matt Corallo
2016-05-08  0:40 ` Johnathan Corgan
2016-05-08  3:24   ` Matt Corallo
2016-05-09  9:35     ` Tom Zander
2016-05-09 10:43       ` Gregory Maxwell
2016-05-09 11:32         ` Tom
     [not found]           ` <CAAS2fgR01=SfpAdHhFd_DFa9VNiL=e1g4FiguVRywVVSqFe9rA@mail.gmail.com>
2016-05-09 12:12             ` [bitcoin-dev] Fwd: " Gregory Maxwell
2016-05-09 23:37               ` [bitcoin-dev] " Peter R
2016-05-10  1:42                 ` Peter R
2016-05-10  2:12                 ` Gregory Maxwell
2016-05-09 13:40           ` Peter Todd
2016-05-09 13:57             ` Tom
2016-05-09 14:04               ` Bryan Bishop
2016-05-09 17:06 ` Pieter Wuille
2016-05-09 18:34   ` Peter R
2016-05-10  5:28   ` Rusty Russell
2016-05-10 10:07     ` Gregory Maxwell
2016-05-10 21:23       ` Rusty Russell [this message]
2016-05-11  1:12         ` Matt Corallo
2016-05-18  1:49   ` Matt Corallo
2016-05-08 10:25 Nicolas Dorier

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=87k2j1zjx0.fsf@rustcorp.com.au \
    --to=rusty@rustcorp$(echo .)com.au \
    --cc=bitcoin-dev@lists$(echo .)linuxfoundation.org \
    --cc=greg@xiph$(echo .)org \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html
Be sure your reply has a Subject: header at the top and a blank line before the message body.
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox