No, I don't think so. The protocol is, essentially: relay transactions;
when you get a block, send the header and iterate over its transactions;
for each one, either use two bytes for nth-recent-transaction-relayed, or
use 0xffff + 3-byte-length + transaction-data. There are quite a few
implementation details, and lots of things could be improved, but that is
pretty much how it works.

Matt

On August 6, 2015 7:16:56 PM GMT+02:00, Sergio Demian Lerner via
bitcoin-dev wrote:

>Is there any up-to-date documentation about TheBlueMatt relay network,
>including what kind of block compression it is currently doing (apart
>from the source code)?
>
>Regards, Sergio.
>
>On Wed, Aug 5, 2015 at 7:14 PM, Gregory Maxwell via bitcoin-dev <
>bitcoin-dev@lists.linuxfoundation.org> wrote:
>
>> On Wed, Aug 5, 2015 at 9:19 PM, Arnoud Kouwenhoven - Pukaki Corp wrote:
>> > Thanks for this (direct) feedback. It would make sense that, if
>> > blocks can be submitted using ~5kb packets, no further optimizations
>> > would be needed at this point. I will look into the relay network
>> > transmission protocol to understand how it works!
>> >
>> > I hear that you are saying that this network solves speed of
>> > transmission and thereby (technical) block size issues. Presumably
>> > it would solve speed of block validation too by prevalidating
>> > transactions.
>>
>> Correct. Bitcoin Core has cached validation for many years now... if
>> not for that and other optimizations, things would be really broken
>> right now. :)
>>
>> > Assuming this is all true, and I have no reason to doubt that at
>> > this point, I do not understand why there is any discussion at all
>> > about the (technical) impact of large blocks, why large numbers of
>> > miners are building on invalid blocks (SPV mining,
>> > https://bitcoin.org/en/alert/2015-07-04-spv-mining), or why there is
>> > any discussion about the speed of block validation (CPU processing
>> > time to verify blocks and transactions in blocks being a
>> > limitation).
>>
>> I'm also mystified by a lot of the large-block discussion; much of it
>> is completely divorced from the technology as deployed, much less what
>> we -- in industry -- know to be possible. I don't blame you or anyone
>> in particular for this; it's a new area, and we don't yet know what we
>> need to know to know what we need to know -- or, to the extent that we
>> do, it hasn't had time to be effectively communicated.
>>
>> The technical/security implications of larger blocks are related to
>> things other than propagation time, if you assume people are using the
>> available efficient relay protocol (or better).
>>
>> SPV mining is a bit of a misnomer (if I coined the term, I'm sorry).
>> What these parties are actually doing is blindly mining on top of
>> other pools' stratum work. You can think of it as sub-pooling with
>> hopping onto whatever pool has the highest block (I'll call it VFSSP
>> in this post -- validation-free stratum subpooling). It's very easy to
>> implement, and there are other considerations.
>>
>> It was initially deployed at a time when a single pool in Europe had
>> amassed more than half of the hashrate. That pool had propagation
>> problems and a very high orphan rate, and it may have (perhaps
>> unintentionally) been performing a selfish mining attack; mining off
>> their stratum work was an easy fix which massively cut down the orphan
>> rates for anyone who did it.
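
(For concreteness, here is a minimal illustrative sketch of the VFSSP
behaviour described above: keep stratum connections to several upstream
pools and, whenever any of them announces work building on a newer block,
immediately point local hashers at an empty template on top of that
prevhash, with no validation at all. The StratumJob fields and the
miner-side hook are hypothetical placeholders, not any real pool API.)

    # Sketch of validation-free stratum subpooling (VFSSP): blindly hop onto
    # whichever upstream pool is working on the newest block. Placeholder
    # names only; real stratum clients and miner interfaces differ.
    from dataclasses import dataclass

    @dataclass
    class StratumJob:
        prevhash: bytes    # previous block hash the upstream pool mines on
        height_hint: int   # block height inferred from the job (e.g. coinbase)
        pool: str          # which upstream pool sent this job

    class VFSSPSubpool:
        def __init__(self):
            self.best_job = None

        def on_upstream_job(self, job: StratumJob) -> None:
            # Called whenever any connected pool pushes new stratum work.
            # No block download, no validation: if the work builds on a block
            # we have not seen yet, start mining an empty block on top of it.
            if self.best_job is None or job.height_hint > self.best_job.height_hint:
                self.best_job = job
                self.push_empty_template_to_miners(job.prevhash)

        def push_empty_template_to_miners(self, prevhash: bytes) -> None:
            # Hand local hashers a coinbase-only template on top of `prevhash`.
            # Details omitted; the point is that no transactions are selected
            # and the parent block is never validated.
            pass
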
>> This was before the relay network protocol existed (the fact that all
>> the hashpower was consolidating on a single pool was a major
>> motivation for creating it).
>>
>> VFSSP also cuts through a number of practical issues miners have had:
>> miners running their own Bitcoin nodes in far-away colocation (>100ms)
>> due to local bandwidth or connectivity issues (censored internet);
>> relay network hubs not being anywhere nearby due to strange internet
>> routing (e.g. Japan to China going via the US for ... reasons...); the
>> CreateNewBlock() function being very slow and unoptimized; etc. There
>> are many other things like this -- and VFSSP avoids them causing
>> delays even when you don't understand them or know about them. So even
>> when they're easily fixed, VFSSP is a more general workaround.
>>
>> Mining operations are also usually run in a largely fire-and-forget
>> manner. There is a long history in (especially pooled) mining of
>> someone setting up an operation and then hardly maintaining it after
>> the fact... so some of the use of VFSSP appears to be just inertia --
>> we have better solutions now, but they take work to deploy, and
>> changing things involves risk (which is heightened by a lack of good
>> monitoring -- participants learn they are too latent only by observing
>> orphaned blocks, at a cost of 25 BTC each).
>>
>> One of the frustrating things about incentives in this space is that
>> bad outcomes are possible even when they're not necessary. E.g. if a
>> miner can lower their orphan rate by deploying a new protocol (or
>> simply fixing some faulty hardware in their infrastructure, like
>> Bitcoin nodes running on cheap VPSes with remote storage), OR they can
>> lower their orphan rate by pointing their hashpower at a free
>> centralized pool, they're likely to do the latter because it takes
>> less effort.
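
(For concreteness, a rough sketch of the per-transaction block encoding Matt
describes at the top of this thread: for each transaction in a block, send
either a two-byte index into the window of recently relayed transactions, or
the 0xffff escape followed by a 3-byte length and the raw transaction. The
function and field names below are made up for illustration; framing,
endianness and window management in the real relay network client differ.)

    # Illustrative encoder for the relay encoding described above. `txs` is a
    # list of objects with `.txid` and `.raw` attributes (placeholders), and
    # `recent_index` maps txid -> position (< 0xffff) in the shared window of
    # recently relayed transactions.
    import struct

    ESCAPE = 0xFFFF  # index value reserved to mean "full transaction follows"

    def encode_block(header: bytes, txs: list, recent_index: dict) -> bytes:
        out = bytearray(header)                        # send the header first
        for tx in txs:
            idx = recent_index.get(tx.txid)
            if idx is not None and idx < ESCAPE:
                out += struct.pack(">H", idx)          # 2-byte recent-tx index
            else:
                out += struct.pack(">H", ESCAPE)       # escape marker
                out += len(tx.raw).to_bytes(3, "big")  # 3-byte length
                out += tx.raw                          # full transaction data
        return bytes(out)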