The problem with this approach is that every node on the network must behave identically when deciding whether to reject a particular block. That would require 100% mempool synchronization across all nodes - otherwise even a simple attempted double spend could fork the network, because some nodes saw the transaction and some didn't. And if we had 100% mempool synchronization, we wouldn't need a blockchain in the first place: we could simply use "first to enter the mempool" as the validity criterion.
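
To make the failure mode concrete, here's a toy sketch (purely illustrative names, and an arbitrary 50% threshold) of two nodes applying the same rule to the same block and reaching opposite verdicts just because their mempools differ slightly:

    # Same block, same rule, different mempools -> opposite verdicts.
    def accepts(block_txids, mempool_txids, threshold=0.5):
        """Accept only if the fraction of never-seen txs is within threshold."""
        unseen = sum(txid not in mempool_txids for txid in block_txids)
        return unseen / len(block_txids) <= threshold

    block = ["a", "b", "c", "d"]
    node_a = {"a", "b", "c"}   # saw 3 of the 4 txs before the block arrived
    node_b = {"a"}             # briefly offline, saw only 1 of 4

    print(accepts(block, node_a))  # True  -> node A builds on this block
    print(accepts(block, node_b))  # False -> node B rejects it: chain split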

On Wed, Jul 1, 2015 at 1:41 AM, Peter Grigor <peter@grigor.ws> wrote:
The block size debate seems to center on one concern: if the block size is increased, malicious miners may publish unreasonably large "bloated" blocks. A miner would do this by generating a plethora of private, non-propagated transactions and including them in the block he solves.

It seems to me that these bloated blocks could easily be detected by other miners and full nodes: they will contain a very high percentage of transactions that aren't found in the nodes' own memory pools. This signature can be exploited to allow nodes to reject these bloated blocks. The key here is that any block a malicious miner bloats with his own transactions would contain a ridiculous number of transactions that *absolutely no other full node has in its mempool*.

Simply put, nodes would set a threshold on the allowable fraction of non-mempool transactions in a solved block (say, maybe, 50% -- I really don't know what it should be). If a published block contains more than this threshold of non-mempool transactions, it is rejected.
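
For concreteness, a minimal sketch of what such a check might look like (the names and the 50% figure are placeholders, not an actual implementation):

    NON_MEMPOOL_THRESHOLD = 0.5  # placeholder value, not a tuned number

    def block_looks_bloated(block_txids, mempool_txids):
        """Flag a block if too many of its txs were never seen in our mempool."""
        if not block_txids:
            return False
        unseen = len(set(block_txids) - set(mempool_txids))
        return unseen / len(block_txids) > NON_MEMPOOL_THRESHOLD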

If this idea works, the block size limitation could be removed entirely.
