The network protocol is not quite consensus critical, but it is important.

Two implementations of the decompressor might not be bug-for-bug compatible.  This (potentially) means that a block could be constructed that fails to decode for one version of the client but decodes fine for another.  That would fork the network.

A "raw" network library is unlikely to have the same problem.

Rather than compressing the whole stream, you could compress only block messages.  A new "cblock" message could be created that is a compressed block.  This shouldn't reduce efficiency by much.
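
For illustration, the sender side could be as simple as deflating the serialized block (zlib is just an example codec here; the function name and framing are made up, not an existing implementation):

    #include <zlib.h>
    #include <cstdint>
    #include <stdexcept>
    #include <vector>

    // Compress a fully serialized block into the payload of a
    // hypothetical "cblock" message.
    std::vector<uint8_t> MakeCBlockPayload(const std::vector<uint8_t>& rawBlock)
    {
        uLongf destLen = compressBound(rawBlock.size());
        std::vector<uint8_t> payload(destLen);
        if (compress(payload.data(), &destLen,
                     rawBlock.data(), rawBlock.size()) != Z_OK)
            throw std::runtime_error("cblock compression failed");
        payload.resize(destLen);
        return payload;
    }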

If a client fails to decode a cblock, then it can ask for the block to be re-sent as a standard "block" message. 

This means that it is a pure performance improvement.  If problems occur, then the client can just switch back to uncompressed mode for that block.
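
The receive side then only needs a decompress attempt that can fail cleanly.  A rough sketch (zlib again, all names invented; the expected uncompressed size is an assumed framing field carried in the cblock message):

    #include <zlib.h>
    #include <cstdint>
    #include <vector>

    // Try to inflate a cblock payload; a false return tells the caller
    // to fall back to a getdata for the ordinary uncompressed "block".
    bool TryDecompressCBlock(const std::vector<uint8_t>& payload,
                             uint32_t expectedSize,
                             std::vector<uint8_t>& rawBlock)
    {
        rawBlock.resize(expectedSize);
        uLongf destLen = expectedSize;
        int rc = uncompress(rawBlock.data(), &destLen,
                            payload.data(), payload.size());
        return rc == Z_OK && destLen == expectedSize;
    }

A false return never stalls the node; the worst case is one redundant full-block transfer.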

You should look into the block relay system.  This gives a larger improvement than simply compressing the stream.  The main benefit is latency, but it also means that actual blocks don't have to be sent at all.  Normally, a node receives all the transactions over the wire and then receives those same transactions a second time when they are later included in a block, so skipping the block gives a potential 50% compression ratio.
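
The reconstruction step is essentially a mempool lookup.  A toy sketch (every type and the mempool interface here are invented for illustration):

    #include <array>
    #include <cstdint>
    #include <map>
    #include <optional>
    #include <vector>

    using TxId = std::array<uint8_t, 32>;
    struct Tx { std::vector<uint8_t> data; };  // stand-in for a real transaction

    // Rebuild a block's transaction list from txids alone, assuming the
    // transactions were already relayed and sit in the local mempool.
    std::optional<std::vector<Tx>> ReconstructBlock(
        const std::vector<TxId>& txids,
        const std::map<TxId, Tx>& mempool)
    {
        std::vector<Tx> txs;
        txs.reserve(txids.size());
        for (const TxId& id : txids) {
            auto it = mempool.find(id);
            if (it == mempool.end())
                return std::nullopt;  // missing tx: fetch it, or fall back to the full block
            txs.push_back(it->second);
        }
        return txs;
    }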

On Tue, Nov 10, 2015 at 5:40 AM, Johnathan Corgan via bitcoin-dev <bitcoin-dev@lists.linuxfoundation.org> wrote:
On Mon, Nov 9, 2015 at 5:58 PM, gladoscc via bitcoin-dev <bitcoin-dev@lists.linuxfoundation.org> wrote:
 
I think 25% bandwidth savings is certainly considerable, especially for people running full nodes in countries like Australia where internet bandwidth is lower and there are data caps.

This reinforces the idea that such trade-off decisions should be local and negotiated between peers, not a required feature of the P2P network protocol.
 

--
Johnathan Corgan
Corgan Labs - SDR Training and Development Services
