public inbox for bitcoindev@googlegroups.com
* [bitcoin-dev] brickchain
@ 2022-10-19  9:04 mm-studios
  2022-10-19 13:40 ` angus
                   ` (2 more replies)
  0 siblings, 3 replies; 12+ messages in thread
From: mm-studios @ 2022-10-19  9:04 UTC (permalink / raw)
  To: bitcoin-dev

[-- Attachment #1: Type: text/plain, Size: 3235 bytes --]

Hi Bitcoin devs,
I'd like to share an idea of a method to increase throughput in the bitcoin network.

Currently, a miner produces blocks with a limited capacity of transactions, which ultimately limits the global settlement throughput to a low number of tx/s.

Big-blockers proposed the removal of limits, but this came with undesirable effects that have been widely discussed, and the proposal was rejected.

The main feature we want to preserve is 'small blocks', which provide 'better network effects'; I won't focus on those here.

The problem with small blocks is that, once a block is filled with transactions, the rest are kept back in the mempool, waiting for their turn in future blocks.

The following changes to the protocol aim to let all transactions into the current block while keeping the block size small. This requires changes to the PoW algorithm.

Currently, the PoW algorithm consists of finding a valid hash for the block. Its validity is determined by comparing the numeric value of the block hash with a protocol-defined difficulty target.

Once a miner finds a nonce for the block that satisfies the condition, the new block becomes valid and can be propagated. All nodes would update their blockchains with it (ignoring conflict resolution (orphan blocks, ...) for clarity).
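
As a rough illustration (a minimal sketch in Python; Bitcoin Core actually works on the 80-byte header and the compact 'bits' encoding of the target), the rule amounts to:

    import hashlib

    def block_hash(header: bytes) -> int:
        # Bitcoin double-SHA256-hashes the block header; the digest is
        # interpreted as a 256-bit little-endian integer for the comparison
        d = hashlib.sha256(hashlib.sha256(header).digest()).digest()
        return int.from_bytes(d, "little")

    def meets_difficulty(header: bytes, target: int) -> bool:
        # valid PoW: the hash, read as a number, is at or below the target
        return block_hash(header) <= target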

This process is designed to happen every 10 minutes on average.

With this background information (which we all already know), I'll go on to describe the idea:

Let's allow a miner to include transactions until the block is filled, and let's call this structure (coining a new term) a 'Brick', B0. [brick = a block that doesn't meet the difficulty rule and is filled with transactions to its full capacity]
Since PoW hashing is continuously active, Brick B0 would carry the nonce corresponding to the minimum numeric hash value found by the time it got filled.
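
To make the structure concrete, here is one possible shape for a brick (a sketch only; the field names and the 8-byte nonce are my assumptions, reusing block_hash from the sketch above):

    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Brick:
        prev_hash: int    # links to the previous brick (or block)
        merkle_root: int  # commits to the transactions filling the brick
        nonce: int        # nonce that produced the lowest hash seen
        best_hash: int    # that minimum hash value: the brick's recorded partial work

    def mine_until_full(header_base: bytes, brick_is_full: Callable[[], bool]):
        # hash continuously while transactions accumulate; when the brick
        # fills up, keep whichever nonce gave the numerically lowest hash
        best_nonce, best = 0, 2**256
        nonce = 0
        while not brick_is_full():
            h = block_hash(header_base + nonce.to_bytes(8, "little"))
            if h < best:
                best_nonce, best = nonce, h
            nonce += 1
        return best_nonce, best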

Fully filled brick B0, with a hash that doesn't meet the difficulty rule, would be broadcast, and nodes would keep it in a separate fork as usual.

At this point, instead of discarding transactions, our miner would start working on a new brick B1, linked with B0 as usual.

Nodes would accept incoming regular blocks, and also bricks whose hashes don't satisfy the difficulty rule, provided the brick is fully filled with transactions. Bricks not fully filled would be rejected as invalid to prevent spam (except if one constitutes the last brick of a brickchain, explained below).

Let's assume that 10 minutes have elapsed and our miner is in a state where N bricks have been produced. The accumulated PoW can then be computed: every brick contains a 'minimum hash found', and when the series of minimum hashes is computationally equivalent to the network difficulty, the full 'brickchain' becomes valid as a block.
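
One way the "computationally equivalent" test could be defined (my own sketch of an assumption; the post deliberately leaves the calculation open) is to convert each brick's best hash into expected work, the same way Bitcoin derives chainwork from a target, and accept the brickchain once the sum reaches one block's worth of work:

    def implied_work(best_hash: int) -> int:
        # expected number of hash attempts to find a hash <= best_hash,
        # mirroring Bitcoin's chainwork formula: work = 2**256 // (target + 1)
        return 2**256 // (best_hash + 1)

    def brickchain_equivalent_to_block(best_hashes: list[int], block_target: int) -> bool:
        required = 2**256 // (block_target + 1)  # work of one full-difficulty block
        return sum(implied_work(h) for h in best_hashes) >= required

One caveat, anticipating the replies below: a single lucky low hash dominates such a sum, so implied work from a best-hash is a far noisier estimate than a met target.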

This calculation would need to be defined more rigorously, but I hope this idea can serve as the seed of a BIP, or otherwise be deemed absurd, which might well be possible; I'd be delighted to discover why a scheme like this wouldn't work.

If it finally worked, it could completely flush mempools, keep transaction fees low, and increase throughput without an increase in the block size, which would raise other concerns related to propagation.

Thank you.
I look forward to your responses.

--
Marcos Mayorga
https://twitter.com/KatlasC

[-- Attachment #2: Type: text/html, Size: 4058 bytes --]

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [bitcoin-dev] brickchain
  2022-10-19  9:04 [bitcoin-dev] brickchain mm-studios
@ 2022-10-19 13:40 ` angus
  2022-10-19 22:47   ` mm-studios
  2022-10-19 13:54 ` Bryan Bishop
  2022-10-19 14:24 ` Erik Aronesty
  2 siblings, 1 reply; 12+ messages in thread
From: angus @ 2022-10-19 13:40 UTC (permalink / raw)
  To: mm-studios, Bitcoin Protocol Discussion


[-- Attachment #1.1.1: Type: text/plain, Size: 6140 bytes --]



> Let's allow a miner to include transactions until the block is filled, and let's call this structure (coining a new term) a 'Brick', B0. [brick = a block that doesn't meet the difficulty rule and is filled with transactions to its full capacity]
> Since PoW hashing is continuously active, Brick B0 would carry the nonce corresponding to the minimum numeric hash value found by the time it got filled.


So, if I'm understanding right, this amounts to "reduce difficulty required for a block ('brick') to be valid if the mempool contains more than 1 block's worth of transactions so we get transactions confirmed faster" using 'bricks' as short-lived sidechains that get merged into blocks?

This would have the same fundamental problem as just making the max blocksize bigger - it increases the rate of growth of storage required for a full node, because you're allowing blocks/bricks to be created faster, so there will be more confirmed transactions to store in a given time window than under current Bitcoin rules.

Bitcoin doesn't take the size of the mempool into account when adjusting the difficulty because the time-between-blocks is 'more important' than avoiding congestion where transactions take ages to get into a block. The fee mechanism in part allows users to decide how urgently they want their tx to get confirmed, and high fees when there is congestion also disincentivises others from transacting at all, which helps arrest mempool growth.

I'd imagine we'd also see a 'highway widening' effect with this kind of proposal - if you increase the tx volume Bitcoin can settle in a given time, that will quickly be used up by more people transacting until we're back at a congested state again.

> Fully filled brick B0, with a hash that doesn't meet the difficulty rule, would be broadcast, and nodes would keep it in a separate fork as usual.


How do we know if the hash the miner does find for a brick was their 'best effort' and they're not just being lazy? There's an element of luck in the best hash a miner can find; sometimes it takes a long time to meet the difficulty requirement and sometimes it happens almost instantly.

How would we know how 'busy' the mempool was at the time a brick from months or years ago was mined?

Nodes have to be able to run through the entire history of the blockchain and check everything is valid. They have to do this using only the previous blocks they've already validated - they won't have historical snapshots of the mempool (they'll build and mutate a UTXO set, but that's different). Transactions don't contain a 'created-at' time that you could compare to the block's creation time (and if they did, you probably couldn't trust it).

With the current system, Nodes can calculate what the difficulty should be for every block based on the previous blocks' times and difficulties - but how would you know an old brick was valid if its difficulty was low but the mempool was busy at the time, vs. getting a fraudulent brick that is actually invalid because there isn't enough work in it? You can't solve this by adding some mempool-size field to bricks, as you'd have to blindly trust miners not to lie about it.
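
For contrast, here is roughly why the current rule is verifiable from headers alone (a simplified sketch; the real rule clamps the timespan and works on the compact 'bits' encoding):

    def next_target(old_target: int, first_time: int, last_time: int) -> int:
        # retarget every 2016 blocks so blocks keep averaging ~10 minutes;
        # every input comes from block headers the node has already
        # validated - no mempool history is needed or even available
        expected = 2016 * 600                # two weeks, in seconds
        actual = last_time - first_time
        return old_target * actual // expected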

If we can't be (fairly) certain that a miner put a minimum amount of work into finding a hash, then you lose all the strengths of PoW.

If you weaken the difficulty requirement - which exists so that mining blocks is hard, making it very hard to intentionally fork the chain, re-mine previous blocks, overtake the other fork, and get the network to re-org onto your chain - then there's no proof of work undergirding consensus in the ledger's state.

Secondly, where does the block reward go? Do brick miners get a fraction of the reward proportionate to the fraction of the difficulty they got to? Later when bricks become part of a block, who gets the block reward for that complete block? Who gets the fees? No miner is going to bother mining a merge-bricks-into-block block if the reward isn't the same or better than just mining a regular block, but each miner of the bricks in it would also want a reward. But, we can't give them both a block reward as that'd increase Bitcoin's issuance rate, which might be the only thing people are more strongly opposed to than increasing the blocksize! xD

> At this point, instead of discarding transactions, our miner would start working on a new brick B1, linked with B0 as usual.
> 

> Nodes would accept incoming regular blocks, and also bricks whose hashes don't satisfy the difficulty rule, provided the brick is fully filled with transactions. Bricks not fully filled would be rejected as invalid to prevent spam (except if one constitutes the last brick of a brickchain, explained below).
> 

> Let's assume that 10 minutes have elapsed and our miner is in a state where N bricks have been produced. The accumulated PoW can then be computed: every brick contains a 'minimum hash found', and when the series of minimum hashes is computationally equivalent to the network difficulty, the full 'brickchain' becomes valid as a block.


But the brick sidechain has to become part of the main blockchain - and as you've got N bricks in the time that there should be 1 block, and each brick is a full block, it feels like this is just a convoluted way to increase the blocksize? Every transaction has to be in the ledger somewhere to be confirmed, so even if the block itself is small and stores references to the bricks, Nodes are going to have to use storage to keep all those full bricks.

It also seems that you'd have to require the bricks sidechain to always be merged into the next actual block - it wouldn't work if the brick chain could keep growing and at the same time the actual blockchain advances (because there'd be risks of double-spends where one tx is in the brick chain and the other in the new block), which I think further makes this feel like a roundabout way of increasing the blocksize.

Despite my critique, this was interesting to think about - and hopefully this is useful (and hopefully I've not seriously misunderstood or said something dumb).

Angus

[-- Attachment #1.1.2.1: Type: text/html, Size: 6847 bytes --]

[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 249 bytes --]

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [bitcoin-dev] brickchain
  2022-10-19  9:04 [bitcoin-dev] brickchain mm-studios
  2022-10-19 13:40 ` angus
@ 2022-10-19 13:54 ` Bryan Bishop
  2022-10-19 14:24 ` Erik Aronesty
  2 siblings, 0 replies; 12+ messages in thread
From: Bryan Bishop @ 2022-10-19 13:54 UTC (permalink / raw)
  To: mm-studios, Bitcoin Protocol Discussion, Bryan Bishop

[-- Attachment #1: Type: text/plain, Size: 425 bytes --]

Hi,

On Wed, Oct 19, 2022 at 6:34 AM mm-studios via bitcoin-dev <
bitcoin-dev@lists•linuxfoundation.org> wrote:

> Fully filled brick B0, with a hash that doesn't meet the difficulty rule,
> would be broadcast, and nodes would keep it in a separate fork as usual.
>

Check out the previous "weak block" proposals:
https://diyhpl.us/~bryan/irc/bitcoin/weak-blocks-links.2016-05-09.txt

- Bryan
https://twitter.com/kanzure

[-- Attachment #2: Type: text/html, Size: 1054 bytes --]

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [bitcoin-dev] brickchain
  2022-10-19  9:04 [bitcoin-dev] brickchain mm-studios
  2022-10-19 13:40 ` angus
  2022-10-19 13:54 ` Bryan Bishop
@ 2022-10-19 14:24 ` Erik Aronesty
  2022-10-19 16:03   ` mm-studios
  2 siblings, 1 reply; 12+ messages in thread
From: Erik Aronesty @ 2022-10-19 14:24 UTC (permalink / raw)
  To: mm-studios, Bitcoin Protocol Discussion

[-- Attachment #1: Type: text/plain, Size: 4149 bytes --]

> currently, a miner produces blocks with a limited capacity of transactions,
which ultimately limits the global settlement throughput to a low number
of tx/s.

reduced settlement speed is a desirable feature and isn't something we need
to fix

the focus should be on layer 2 protocols that allow the ability to hold &
transfer uncommitted transactions as pools / joins, so that layer 1's
decentralization and incentives can remain undisturbed

protocols like mweb, for example




On Wed, Oct 19, 2022 at 7:34 AM mm-studios via bitcoin-dev <
bitcoin-dev@lists•linuxfoundation.org> wrote:

> [...]

[-- Attachment #2: Type: text/html, Size: 5907 bytes --]

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [bitcoin-dev] brickchain
  2022-10-19 14:24 ` Erik Aronesty
@ 2022-10-19 16:03   ` mm-studios
  2022-10-19 21:34     ` G. Andrew Stone
  2022-11-08 14:16     ` Erik Aronesty
  0 siblings, 2 replies; 12+ messages in thread
From: mm-studios @ 2022-10-19 16:03 UTC (permalink / raw)
  To: Erik Aronesty; +Cc: Bitcoin Protocol Discussion

[-- Attachment #1: Type: text/plain, Size: 4965 bytes --]

Thanks all for your responses.
so is it a no-go because "reduced settlement speed is a desirable feature"?

I don't know what weighs more in this consideration:
A) to not increase the workload of full-nodes, keeping them "less difficult to operate" and hence reducing the chance of some of them giving up, which would lead to a negative centralization effect (a bit cumbersome reasoning in my opinion, given the competitive nature of PoW itself, which introduces an accepted centralization, forcing some miners to give up; in that case the fact is accepted because mining is decentralized enough).
B) to not undermine L2 systems like LN.

In any case it is a major no-go reason if there is no intention to speed up L1.
Thanks
M

------- Original Message -------
On Wednesday, October 19th, 2022 at 3:24 PM, Erik Aronesty <erik@q32•com> wrote:

>> currently, a miner produces blocks with a limited capacity of transactions, which ultimately limits the global settlement throughput to a low number of tx/s.
>
> reduced settlement speed is a desirable feature and isn't something we need to fix
>
> the focus should be on layer 2 protocols that allow the ability to hold & transfer uncommitted transactions as pools / joins, so that layer 1's decentralization and incentives can remain undisturbed
>
> protocols like mweb, for example
>
> On Wed, Oct 19, 2022 at 7:34 AM mm-studios via bitcoin-dev <bitcoin-dev@lists•linuxfoundation.org> wrote:
>
>> [...]

[-- Attachment #2: Type: text/html, Size: 7401 bytes --]

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [bitcoin-dev] brickchain
  2022-10-19 16:03   ` mm-studios
@ 2022-10-19 21:34     ` G. Andrew Stone
  2022-10-19 22:53       ` mm-studios
  2022-11-08 14:16     ` Erik Aronesty
  1 sibling, 1 reply; 12+ messages in thread
From: G. Andrew Stone @ 2022-10-19 21:34 UTC (permalink / raw)
  To: mm-studios, Bitcoin Protocol Discussion

[-- Attachment #1: Type: text/plain, Size: 6286 bytes --]

Consider that a miner can also produce transactions.  So every miner would
produce spam tx to fill their bricks at the minimum allowed difficulty to
reap the brick coinbase reward.

You might quickly respond with a modification that changes or eliminates
the brick coinbase reward, but perhaps that exact reward and the major
negative consequence of miners creating spam tx needs careful thought.

See "bobtail" for a weak block proposal that produces a more consistent
discovery time, and "tailstorm" for a proposal that uses the content of
those weak blocks as commitment to what transactions miners are working on
(which will allow more trustworthy (but still not foolproof) use of
transactions before confirmation)... neither of which have a snowball's
chance in hell (along with any other hard forking change) of being put into
bitcoin :-).

Andrew

On Wed, Oct 19, 2022 at 12:05 PM mm-studios via bitcoin-dev <
bitcoin-dev@lists•linuxfoundation.org> wrote:

> [...]

[-- Attachment #2: Type: text/html, Size: 9098 bytes --]

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [bitcoin-dev] brickchain
  2022-10-19 13:40 ` angus
@ 2022-10-19 22:47   ` mm-studios
  0 siblings, 0 replies; 12+ messages in thread
From: mm-studios @ 2022-10-19 22:47 UTC (permalink / raw)
  To: angus; +Cc: Bitcoin Protocol Discussion

[-- Attachment #1: Type: text/plain, Size: 10227 bytes --]

------- Original Message -------
On Wednesday, October 19th, 2022 at 2:40 PM, angus <angus@toaster•cc> wrote:

>> Let's allow a miner to include transactions until the block is filled, and let's call this structure (coining a new term) a 'Brick', B0. [brick = a block that doesn't meet the difficulty rule and is filled with transactions to its full capacity]
>> Since PoW hashing is continuously active, Brick B0 would carry the nonce corresponding to the minimum numeric hash value found by the time it got filled.
>
> So, if I'm understanding right, this amounts to "reduce difficulty required for a block ('brick') to be valid if the mempool contains more than 1 block's worth of transactions so we get transactions confirmed faster" using 'bricks' as short-lived sidechains that get merged into blocks?

They wouldn't get confirmed faster.
Imagine a regular Big Block (BB) could be re-structured as a brickchain
BB = B0 <- B1 <- ... <- Bn (Block = chain of bricks)

Only B0 contains the coinbase transaction.

Bi are streamed from miner to nodes as they are produced.
The node creates a separate fork on B0's arrival, and on arrival of the last brick Bn it treats the whole brickchain as it now treats one block: either accept it or reject it as a whole (as if the complete block had just arrived entirely; in reality it has arrived as a stream of bricks).
Before the brickchain is complete, the node does nothing special: it just validates each brick on arrival and waits for the next.
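
A rough sketch of that node-side behaviour (illustrative Python only; valid_transactions, extends_tip and connect_as_block are hypothetical placeholders, and the work calculation follows the implied-work idea sketched earlier in the thread):

    def valid_transactions(brick) -> bool:    # placeholder: full tx validation
        return True

    def extends_tip(brick, bricks) -> bool:   # placeholder: linkage check
        return not bricks or brick.prev_hash == bricks[-1].best_hash

    def connect_as_block(bricks) -> None:     # placeholder: apply to chainstate
        pass

    class BrickchainBuffer:
        def __init__(self, block_target: int):
            self.required = 2**256 // (block_target + 1)  # one block's worth of work
            self.bricks = []

        def on_brick(self, brick) -> str:
            # validate each brick on arrival; nothing special happens until
            # the accumulated implied work reaches a full-difficulty block,
            # then the whole brickchain is accepted or rejected atomically
            if not valid_transactions(brick) or not extends_tip(brick, self.bricks):
                return "reject"
            self.bricks.append(brick)
            total = sum(2**256 // (b.best_hash + 1) for b in self.bricks)
            if total >= self.required:
                connect_as_block(self.bricks)  # accept/reject as one unit
                self.bricks = []
                return "accepted-as-block"
            return "buffered"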

> This would have the same fundamental problem as just making the max blocksize bigger - it increases the rate of growth of storage required for a full node, because you're allowing blocks/bricks to be created faster, so there will be more confirmed transactions to store in a given time window than under current Bitcoin rules.

Yes, the data transmitted over the network is bigger, because we are intentionally increasing the throughput instead of delaying tx in the mempool.
This is a potential how-to, in case there is an intention to speed up L1.
The unavoidable price of speed in tx/s is bandwidth and volume of data to process.
The point is to do it without making bigger blocks.

> Bitcoin doesn't take the size of the mempool into account when adjusting the difficulty because the time-between-blocks is 'more important' than avoiding congestion where transactions take ages to get into a block. The fee mechanism in part allows users to decide how urgently they want their tx to get confirmed, and high fees when there is congestion also disincentivises others from transacting at all, which helps arrest mempool growth.

Streaming bricks instead of delivering a big block can be considered a way of reducing congestion. This is valid at any scale.
E.g. a 1 MB block delivered at once every 10 minutes, versus a stream of ten 100 KiB bricks delivered one per minute.

> I'd imagine we'd also see a 'highway widening' effect with this kind of proposal - if you increase the tx volume Bitcoin can settle in a given time, that will quickly be used up by more people transacting until we're back at a congested state again.
>
>> Fully filled brick B0, with a hash that doesn't meet the difficulty rule, would be broadcast, and nodes would keep it in a separate fork as usual.

Congestion can always happen with enough workload.
A system able to measure its workload can regulate it (keeping tx in the mempool to temporarily alleviate congestion).

The congestion counter-measure remains in essence the same.

> How do we know if the hash the miner does find for a brick was their 'best effort' and they're not just being lazy? There's an element of luck in the best hash a miner can find; sometimes it takes a long time to meet the difficulty requirement and sometimes it happens almost instantly.

A lazy miner will produce a longer brickchain, because their best hashes would be higher (worse) than a more powerful miner's. A more competitive miner will deliver the complete brickchain faster, and hence the lazy miner's half-finished brickchain will be discarded.
It is exactly like the current system with a lazy miner.

> How would we know how 'busy' the mempool was at the time a brick from months or years ago was mined?

I don't understand the question, but I guess it is the same answer, replacing bricks with blocks.

> Nodes have to be able to run through the entire history of the blockchain and check everything is valid. They have to do this using only the previous blocks they've already validated - they won't have historical snapshots of the mempool (they'll build and mutate a UTXO set, but that's different). Transactions don't contain a 'created-at' time that you could compare to the block's creation time (and if they did, you probably couldn't trust it).

Why does this question apply to the concept of bricks and not to the concept of blocks?

I see the resulting blockchain would be a chain of blocks and bricks:
Bi = block at height i
bi,j = j-th brick at height i

Chain of blocks and bricks:
B1 <- B2 <- [b3,1 <- b3,2 <- b3,3] <- B4 <- B5 <- [b6,1 <- b6,2] <- B7 ...
            (equivalent to B3)                    (equivalent to B6)

> With the current system, Nodes can calculate what the difficulty should be for every block based on the previous blocks' times and difficulties - but how would you know an old brick was valid if its difficulty was low but the mempool was busy at the time, vs. getting a fraudulent brick that is actually invalid because there isn't enough work in it? You can't solve this by adding some mempool-size field to bricks, as you'd have to blindly trust miners not to lie about it.

The moment a not-fully-filled brick arrives at a node, it marks the end of the brickchain, which is then considered complete and treated as a block.

Say you start working on a brick that can't be filled: you find the mempool empty, or you want to send it for any reason. It still constitutes a valid brickchain, because it has one brick with only the last brick not completely filled; the rest of the previous bricks (zero, in this corner case) are fully filled.

> If we can't be (fairly) certain that a miner put a minimum amount of work into finding a hash, then you lose all the strengths of PoW.

The PoW still works the same, because the whole brickchain is accepted or rejected by a miner in an atomic way (i.e. as a block, with an aggregated PoW that must beat other possible blocks to stay in the longest chain).

> If you weaken the difficulty requirement - which exists so that mining blocks is hard, making it very hard to intentionally fork the chain, re-mine previous blocks, overtake the other fork, and get the network to re-org onto your chain - then there's no proof of work undergirding consensus in the ledger's state.

If the brickchain is treated as an indivisible structure once it is fully received, there is no difference from the strength of blocks.
The technique doesn't weaken the PoW; it spreads it over time, but every 10 minutes the amount of energy invested in the whole brickchain is what counts.

> Secondly, where does the block reward go? Do brick miners get a fraction of the reward proportionate to the fraction of the difficulty they got to? Later when bricks become part of a block, who gets the block reward for that complete block? Who gets the fees? No miner is going to bother mining a merge-bricks-into-block block if the reward isn't the same or better than just mining a regular block, but each miner of the bricks in it would also want a reward. But, we can't give them both a block reward as that'd increase Bitcoin's issuance rate, which might be the only thing people are more strongly opposed to than increasing the blocksize! xD

The coinbase tx can go in the first brick (and only in one brick of the brickchain).

Since a brickchain is treated as one block, the coinbase tx can be the first transaction in the first brick.

The reward only happens when the block is buried in the blockchain.
In the same way, the reward happens when the complete brickchain is buried in the chain of blocks/brickchains.
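
A sketch of how that placement rule might be checked (purely illustrative; the txs list and is_coinbase flag are my assumptions, nothing like this is specified):

    def coinbase_placement_ok(bricks) -> bool:
        # hypothetical rule: exactly one coinbase per brickchain, and it
        # must be the first transaction of the first brick
        for i, brick in enumerate(bricks):
            for j, tx in enumerate(brick.txs):
                if tx.is_coinbase and (i, j) != (0, 0):
                    return False
        return bool(bricks and bricks[0].txs and bricks[0].txs[0].is_coinbase)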

>> At this point, instead of discarding transactions, our miner would start working on a new brick B1, linked with B0 as usual.
>>
>> Nodes would accept incoming regular blocks, and also bricks whose hashes don't satisfy the difficulty rule, provided the brick is fully filled with transactions. Bricks not fully filled would be rejected as invalid to prevent spam (except if one constitutes the last brick of a brickchain, explained below).
>>
>> Let's assume that 10 minutes have elapsed and our miner is in a state where N bricks have been produced. The accumulated PoW can then be computed: every brick contains a 'minimum hash found', and when the series of minimum hashes is computationally equivalent to the network difficulty, the full 'brickchain' becomes valid as a block.
>
> But the brick sidechain has to become part of the main blockchain - and as you've got N bricks in the time that there should be 1 block, and each brick is a full block, it feels like this is just a convoluted way to increase the blocksize? Every transaction has to be in the ledger somewhere to be confirmed, so even if the block itself is small and stores references to the bricks, Nodes are going to have to use storage to keep all those full bricks.

Except that big blocks are, as I mentioned earlier, sent at once,

while this model is more like streaming a block (spreading the data over time).

> It also seems that you'd have to require the bricks sidechain to always be merged into the next actual block - it wouldn't work if the brick chain could keep growing and at the same time the actual blockchain advances (because there'd be risks of double-spends where one tx is in the brick chain and the other in the new block), which I think further makes this feel like a roundabout way of increasing the blocksize.

Not merged:

the brickchain would be appended to the previous block or brickchain of the main chain (I don't know what to call it now, lol).

> Despite my critique, this was interesting to think about - and hopefully this is useful (and hopefully I've not seriously misunderstood or said something dumb).

Thanks for your considerations. I am defending the idea in real time;
perhaps at some point I'll be caught in an impossibility. Until then, let's try to find the catch, if it exists. :)

Marcos

> Angus

[-- Attachment #2: Type: text/html, Size: 16782 bytes --]

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [bitcoin-dev] brickchain
  2022-10-19 21:34     ` G. Andrew Stone
@ 2022-10-19 22:53       ` mm-studios
  0 siblings, 0 replies; 12+ messages in thread
From: mm-studios @ 2022-10-19 22:53 UTC (permalink / raw)
  To: G. Andrew Stone; +Cc: Bitcoin Protocol Discussion

[-- Attachment #1: Type: text/plain, Size: 6995 bytes --]

------- Original Message -------
On Wednesday, October 19th, 2022 at 10:34 PM, G. Andrew Stone <g.andrew.stone@gmail•com> wrote:

> Consider that a miner can also produce transactions. So every miner would produce spam tx to fill their bricks at the minimum allowed difficulty to reap the brick coinbase reward.

Except that, as I explained in a previous email, bricks don't contain a reward. They are meaningless unless they form a complete brickchain with an accumulated difficulty equivalent to the current block difficulty.

> You might quickly respond with a modification that changes or eliminates the brick coinbase reward, but perhaps that exact reward and the major negative consequence of miners creating spam tx needs careful thought.

Since one block is equivalent to one brickchain, there exists only one coinbase tx;
and since the brickchain is treated atomically as a whole, it follows the same processing as a block.
The only observable difference on the wire (and the reason throughput increases) is that the information has been transmitted as a stream (a decomposed block, spaced in time).

> See "bobtail" for a weak block proposal that produces a more consistent discovery time, and "tailstorm" for a proposal that uses the content of those weak blocks as commitment to what transactions miners are working on (which will allow more trustworthy (but still not foolproof) use of transactions before confirmation)... neither of which have a snowball's chance in hell (along with any other hard forking change) of being put into bitcoin :-).

thanks
Marcos

> Andrew
>
> On Wed, Oct 19, 2022 at 12:05 PM mm-studios via bitcoin-dev <bitcoin-dev@lists•linuxfoundation.org> wrote:
>
>> [...]

[-- Attachment #2: Type: text/html, Size: 11449 bytes --]

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [bitcoin-dev] brickchain
  2022-10-19 16:03   ` mm-studios
  2022-10-19 21:34     ` G. Andrew Stone
@ 2022-11-08 14:16     ` Erik Aronesty
  2022-11-08 14:25       ` mm-studios
  1 sibling, 1 reply; 12+ messages in thread
From: Erik Aronesty @ 2022-11-08 14:16 UTC (permalink / raw)
  To: mm-studios; +Cc: Bitcoin Protocol Discussion

[-- Attachment #1: Type: text/plain, Size: 5870 bytes --]

> A) to not increase the workload of full-nodes

yes, this is critical

>  given the competitive nature of PoW itself

validating nodes do not compete with PoW, i think maybe you are not sure of
the difference between a miner and a node

nodes do validation of transactions, they do this for free, and many of
them provide essential services, like SPV validation for mobile


> B) to not undermine L2 systems like LN.

yes, as a general rule, layered financial systems are vastly superior, so
that risks incurred by edge layers are not propagated fully to the inner
layers. For example, L3 projects like TARO and RGB are building on
lightning with less risk

On Wed, Oct 19, 2022 at 12:04 PM mm-studios <mm@mm-studios•com> wrote:

> [...]

[-- Attachment #2: Type: text/html, Size: 9156 bytes --]

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [bitcoin-dev] brickchain
  2022-11-08 14:16     ` Erik Aronesty
@ 2022-11-08 14:25       ` mm-studios
  2022-11-08 15:49         ` Erik Aronesty
  0 siblings, 1 reply; 12+ messages in thread
From: mm-studios @ 2022-11-08 14:25 UTC (permalink / raw)
  To: Erik Aronesty; +Cc: Bitcoin Protocol Discussion

[-- Attachment #1: Type: text/plain, Size: 6145 bytes --]

------- Original Message -------
On Tuesday, November 8th, 2022 at 2:16 PM, Erik Aronesty <erik@q32•com> wrote:

>> A) to not increase the workload of full-nodes
>
> yes, this is critical
>
>> given the competitive nature of PoW itself
>
> validating nodes do not compete with PoW, i think maybe you are not sure of the difference between a miner and a node
>
> nodes do validation of transactions, they do this for free, and many of them provide essential services, like SPV validation for mobile

I think it's pretty clear that the "competitive nature of PoW" is not referring to verification nodes (Satoshi preferred this other word).

> B) to not undermine L2 systems like LN.
>
> yes, as a general rule, layered financial systems are vastly superior. so that risks incurred by edge layers are not propagated fully to the inner layers. For example L3 projects like TARO and RGB are building on lightning with less risk

layers also add fees to users

> On Wed, Oct 19, 2022 at 12:04 PM mm-studios <mm@mm-studios•com> wrote:
>
>> [...]

[-- Attachment #2: Type: text/html, Size: 10328 bytes --]

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [bitcoin-dev] brickchain
  2022-11-08 14:25       ` mm-studios
@ 2022-11-08 15:49         ` Erik Aronesty
  2022-11-08 16:31           ` mm-studios
  0 siblings, 1 reply; 12+ messages in thread
From: Erik Aronesty @ 2022-11-08 15:49 UTC (permalink / raw)
  To: mm-studios; +Cc: Bitcoin Protocol Discussion

[-- Attachment #1: Type: text/plain, Size: 6777 bytes --]

> I think it's pretty clear that the "competitive nature of PoW" is not
> referring to verification nodes

cool, so we can agree there is no accepted centralization pressure for
validating nodes then

> layers also add fees to users

source?  i feel like it's obvious that the tree-like efficiencies should
reduce fees, but i'd appreciate your research on that topic
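
to make that concrete, here's a toy back-of-the-envelope sketch (all numbers are made-up assumptions, not measurements) of how one shared settlement transaction amortizes its fee across users:

# Toy model: per-user on-chain fee when each user settles alone vs.
# when N users share one settlement tx (a pool / join style construct).
# All constants below are invented for illustration.

ONCHAIN_FEE_PER_VBYTE = 20    # assumed fee rate, sat/vB
SOLO_TX_VSIZE = 140           # assumed vsize of a simple 1-in/2-out tx
SHARED_BASE_VSIZE = 110       # assumed fixed overhead of a joint tx
PER_USER_OUTPUT_VSIZE = 43    # assumed marginal vsize per participant

def per_user_fee(n_users: int) -> float:
    """Fee per user (sats) when n_users share one settlement tx."""
    total_vsize = SHARED_BASE_VSIZE + n_users * PER_USER_OUTPUT_VSIZE
    return total_vsize * ONCHAIN_FEE_PER_VBYTE / n_users

solo_fee = SOLO_TX_VSIZE * ONCHAIN_FEE_PER_VBYTE
for n in (1, 10, 100):
    print(f"{n:>4} users sharing: {per_user_fee(n):6.0f} sat each"
          f"  (settling solo: {solo_fee} sat)")

with these made-up numbers the per-user cost falls from ~2800 sat solo to ~880 sat at 100 participants, which is roughly the kind of tree-like efficiency i mean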


On Tue, Nov 8, 2022 at 9:25 AM mm-studios <mm@mm-studios•com> wrote:

>
> [earlier messages in this thread quoted in full -- snipped]

[-- Attachment #2: Type: text/html, Size: 11717 bytes --]

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [bitcoin-dev] brickchain
  2022-11-08 15:49         ` Erik Aronesty
@ 2022-11-08 16:31           ` mm-studios
  0 siblings, 0 replies; 12+ messages in thread
From: mm-studios @ 2022-11-08 16:31 UTC (permalink / raw)
  To: Erik Aronesty; +Cc: Bitcoin Protocol Discussion

[-- Attachment #1: Type: text/plain, Size: 7302 bytes --]

------- Original Message -------
On Tuesday, November 8th, 2022 at 3:49 PM, Erik Aronesty <erik@q32•com> wrote:

>> I think it's pretty clear that the "competitive nature of PoW" is not referring to verification nodes
>
> cool, so we can agree there is no accepted centralization pressure for validating nodes then

The centralization produced by PoW only affects miners; the rest of the nodes are freely distributed.
In the producer-consumer view, consumers (blockchain builders) are satisfactorily distributed. The same cannot be said of miners (block producers), who form a rather centralized subsystem, with only a handful of major pools producing blocks.
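
As a rough illustration (hypothetical hashrate shares, not real data), a few large pools end up producing most of the ~144 blocks mined per day:

# Hypothetical pool hashrate shares -- invented for illustration only.
pools = {"pool_a": 0.30, "pool_b": 0.25, "pool_c": 0.18,
         "pool_d": 0.16, "all_others": 0.11}

BLOCKS_PER_DAY = 144  # one block every ~10 minutes on average

cumulative = 0.0
for name, share in sorted(pools.items(), key=lambda kv: kv[1], reverse=True):
    cumulative += share
    print(f"{name}: {share:.0%} of hashrate, ~{share * BLOCKS_PER_DAY:.0f} "
          f"blocks/day (cumulative {cumulative:.0%})")

With these assumed shares, just four pools account for ~89% of daily block production.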

>> layers also add fees to users
>
> source? i feel like it's obvious that the tree-like efficiencies should reduce fees, but i'd appreciate your research on that topic

Systems (layers) where abuse is controlled by fees each add a cost of their own, so stacking layers stacks fees for the end user.
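
A toy sketch of what I mean (the fee figures are invented placeholders): a payment that crosses several fee-charging layers pays the sum of those per-layer fees:

# Invented placeholder fees, in sats -- the only point is that
# per-layer anti-abuse fees accumulate for the end user.
layer_fees = {
    "L1 settlement (amortized)": 150,
    "L2 routing (e.g. LN)": 5,
    "L3 protocol": 2,
}

for layer, fee in layer_fees.items():
    print(f"{layer}: {fee} sats")
print(f"total paid by the user: {sum(layer_fees.values())} sats")

Whether amortization savings outweigh these stacked costs is the open question.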

> On Tue, Nov 8, 2022 at 9:25 AM mm-studios <mm@mm-studios•com> wrote:
>
> [earlier messages in this thread quoted in full -- snipped]

[-- Attachment #2: Type: text/html, Size: 13352 bytes --]

^ permalink raw reply	[flat|nested] 12+ messages in thread

end of thread, other threads:[~2022-11-08 16:31 UTC | newest]

Thread overview: 12+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-10-19  9:04 [bitcoin-dev] brickchain mm-studios
2022-10-19 13:40 ` angus
2022-10-19 22:47   ` mm-studios
2022-10-19 13:54 ` Bryan Bishop
2022-10-19 14:24 ` Erik Aronesty
2022-10-19 16:03   ` mm-studios
2022-10-19 21:34     ` G. Andrew Stone
2022-10-19 22:53       ` mm-studios
2022-11-08 14:16     ` Erik Aronesty
2022-11-08 14:25       ` mm-studios
2022-11-08 15:49         ` Erik Aronesty
2022-11-08 16:31           ` mm-studios

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox