> A) to not increase the workload of full-nodes

yes, this is critical

>  given the competitive nature of PoW itself

validating nodes do not compete with PoW; I think maybe you are not sure of the difference between a miner and a node

nodes do validation of transactions; they do this for free, and many of them provide essential services, like SPV support for mobile clients

 
> B) to not undermine L2 systems like LN.

yes, as a general rule, layered financial systems are vastly superior, because risks incurred by the outer layers are not propagated fully to the inner layers.  For example, L3 projects like TARO and RGB are building on Lightning with less risk

On Wed, Oct 19, 2022 at 12:04 PM mm-studios <mm@mm-studios.com> wrote:
Thanks all for your responses.
so is it a no-go because "reduced settlement speed is a desirable feature"?

I don't know what weighs more in this consideration:
A) to not increase the workload of full nodes, keeping them "less difficult to operate" and hence reducing the chance of some of them giving up, which would lead to a negative centralization effect. (A bit cumbersome reasoning in my opinion, given the competitive nature of PoW itself, which introduces an accepted form of centralization by forcing some miners to give up; that fact is accepted because mining remains decentralized enough.)
B) to not undermine L2 systems like LN.

in any case it is a major no-go reason if there is no intention to speed up L1.
Thanks
M
------- Original Message -------
On Wednesday, October 19th, 2022 at 3:24 PM, Erik Aronesty <erik@q32.com> wrote:

> currently, a miner produces blocks with a limited capacity of transactions, which ultimately limits the global settlement throughput to a reduced number of tx/s.

reduced settlement speed is a desirable feature and isn't something we need to fix

the focus should be on layer 2 protocols that allow the ability to hold and transfer uncommitted transactions as pools/joins, so that layer 1's decentralization and incentives can remain undisturbed

protocols like mweb, for example




On Wed, Oct 19, 2022 at 7:34 AM mm-studios via bitcoin-dev <bitcoin-dev@lists.linuxfoundation.org> wrote:
Hi Bitcoin devs,
I'd like to share an idea of a method to increase throughput in the bitcoin network.

Currently, a miner produces blocks with a limited capacity of transactions, which ultimately limits the global settlement throughput to a reduced number of tx/s.

Big-blockers proposed removing these limits, but that came with undesirable effects, which have been widely discussed, and the approach was rejected.

The main feature we wanted to preserve is 'small blocks', which provide 'better network effects'; I won't focus on them here.

The problem with small blocks is that, once a block is filled with transactions, the remaining ones are held back in the mempool, waiting for their turn in future blocks.

The following changes to the protocol aim to let all pending transactions go into the current block while keeping the block size small. This requires changes to the PoW algorithm.

Currently, the PoW algorithm consists of finding a valid hash for the block. Its validity is determined by comparing the numeric value of the block hash against a protocol-defined difficulty target.

Once a miner finds a nonce for the block that satisfies the condition, the new block becomes valid and can be propagated. All nodes would then update their blockchains with it (assuming no conflict resolution, e.g. orphan blocks, for clarity).

This process is meant to happen every 10 minutes on average.
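
As a minimal sketch of that validity check in Python (illustrative only, not actual node code):

    import hashlib

    def block_hash(header: bytes) -> int:
        # Bitcoin hashes the 80-byte block header twice with SHA-256 and
        # interprets the result as a little-endian 256-bit integer.
        digest = hashlib.sha256(hashlib.sha256(header).digest()).digest()
        return int.from_bytes(digest, "little")

    def meets_difficulty(header: bytes, target: int) -> bool:
        # Valid PoW: the hash, read as a number, is at or below the target.
        return block_hash(header) <= target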

With this background information (which we all already know), I go on to describe the idea:

Let's allow a miner to include transactions until the block is filled; let's call this structure B0, coining a new term: 'brick'. [brick = a block that doesn't meet the difficulty rule and is filled with transactions to its full capacity]
Since PoW hashing is continuously active, brick B0 would carry the nonce corresponding to the minimum numeric hash value found up to the moment it was filled.
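
A minimal sketch of such a structure (field names are illustrative, not a wire format):

    from dataclasses import dataclass

    @dataclass
    class Brick:
        prev_hash: int             # hash of the preceding brick or block
        transactions: list[bytes]  # filled up to the capacity limit
        nonce: int                 # nonce that produced the best hash so far
        best_hash: int             # minimum numeric hash value found before filling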

Fully filled brick B0, with a hash that doesn't meet the difficulty rule, would be broadcast, and nodes would keep it in a separate fork as usual.

At this point, instead of discarding transactions, our miner would start working on a new brick B1, linked with B0 as usual.

Nodes would accept incoming regular blocks as well as bricks whose hashes don't satisfy the difficulty rule, provided the brick is completely filled with transactions. Bricks that are not fully filled would be rejected as invalid to prevent spam (except when one constitutes the last brick of a brickchain, as explained below).
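
A sketch of that acceptance rule, reusing the Brick structure above (MAX_BRICK_TXS is a hypothetical constant; real capacity would be weight-based, not a transaction count):

    MAX_BRICK_TXS = 4000  # hypothetical per-brick capacity, for illustration

    def accept_brick(brick: Brick, target: int, last_in_brickchain: bool) -> bool:
        # A brick that meets the difficulty rule is just a regular block.
        if brick.best_hash <= target:
            return True
        # Below the target, only completely filled bricks are accepted,
        # except for the final brick of a brickchain.
        return last_in_brickchain or len(brick.transactions) == MAX_BRICK_TXS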

Let's assume that 10 minutes have elapsed and our miner is in a state where N bricks have been produced. The accumulated PoW can be calculated mathematically: every brick contains a 'minimum hash found', and when the series of minimum hashes is computationally equivalent to the network difficulty, the full 'brickchain' becomes valid as a block.

This calculus still needs to be properly defined, but I hope this idea can serve as a seed for a BIP, or otherwise be deemed absurd, which may well be the case; I'd be delighted to discover why a scheme like this wouldn't work.
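
For concreteness, here is one possible way to define the accumulated work; this is a sketch under my own assumption that each brick contributes the expected work of its minimum hash, work = 2^256 / (hash + 1):

    def expected_work(hash_value: int) -> int:
        # Expected number of hash attempts to find a hash at or below
        # hash_value (the usual per-block chain-work estimate).
        return 2**256 // (hash_value + 1)

    def brickchain_valid_as_block(min_hashes: list[int], target: int) -> bool:
        # One interpretation: the brickchain closes once the summed expected
        # work of the bricks' minimum hashes reaches the expected work of a
        # single block at the current difficulty target.
        accumulated = sum(expected_work(h) for h in min_hashes)
        return accumulated >= expected_work(target)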

If it finally worked, it could completely flush the mempool, keep transaction fees low, and increase throughput without an increase in the block size, which would otherwise raise other concerns related to propagation.

Thank you.
I look forward to your responses.

--
Marcos Mayorga
https://twitter.com/KatlasC

_______________________________________________
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev