------- Original Message -------
On Wednesday, October 19th, 2022 at 2:40 PM, angus wrote:

>> Let's allow a miner to include transactions until the block is filled; let's call this structure (coining a new term, 'Brick') B0. [brick = block that doesn't meet the difficulty rule and is filled with tx to its full capacity]
>>
>> Since PoW hashing is continuously active, Brick B0 would have a nonce corresponding to the minimum numeric value of its hash found until it got filled.

> So, if I'm understanding right, this amounts to "reduce difficulty required for a block ('brick') to be valid if the mempool contains more than 1 block's worth of transactions so we get transactions confirmed faster", using 'bricks' as short-lived sidechains that get merged into blocks?

They wouldn't get confirmed faster. Imagine a regular Big Block (BB) re-structured as a brickchain:

BB = B0 <- B1 <- ... <- Bn   (Block = chain of bricks)

Only B0 contains the coinbase transaction. The Bi are streamed from miner to nodes as they are produced. The node creates a separate fork on B0's arrival, and on arrival of the last brick Bn it treats the whole brickchain as nodes now treat one block: either accept it or reject it as a whole, as if the complete block had just arrived entirely. (In reality it has arrived as a stream of bricks.) Before the brickchain is complete, the node does nothing special: it just validates each brick on arrival and waits for the next.

> This would have the same fundamental problem as just making the max blocksize bigger - it increases the rate of growth of storage required for a full node, because you're allowing blocks/bricks to be created faster, so there will be more confirmed transactions to store in a given time window than under current Bitcoin rules.

Yes, the data transmitted over the network is bigger, because we are intentionally increasing the throughput instead of delaying tx in the mempool. This is a potential how-to in case there is an intention of speeding up L1. The unavoidable price of more tx/s is bandwidth and volume of data to process. The point is to pay that price without making bigger blocks.

> Bitcoin doesn't take the size of the mempool into account when adjusting the difficulty because the time-between-blocks is 'more important' than avoiding congestion where transactions take ages to get into a block. The fee mechanism in part allows users to decide how urgently they want their tx to get confirmed, and high fees when there is congestion also disincentivises others from transacting at all, which helps arrest mempool growth.

Streaming bricks instead of delivering a big block can be considered a way of reducing congestion. This is valid at any scale: e.g. one 1 MB block delivered at once every 10 minutes versus a stream of ten 100 KiB bricks delivered one per minute.

> I'd imagine we'd also see a 'highway widening' effect with this kind of proposal - if you increase the tx volume Bitcoin can settle in a given time, that will quickly be used up by more people transacting until we're back at a congested state again.

>> Fully filled brick B0, with a hash that doesn't meet the difficulty rule, would be broadcast, and nodes would keep it in a separate fork as usual.

Congestion can always happen with enough workload. A system able to determine its own workload can regulate it (keeping tx in the mempool to temporarily alleviate the load). The congestion counter-measure remains in essence the same.
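To make the streaming flow concrete, here is a minimal Python sketch of the node-side handling described above. Every name and constant in it (Brick, BrickchainFork, MAX_BRICK_TXS, BLOCK_TARGET) is hypothetical, invented for illustration; scoring a brick by 2**256/(best_hash+1) is my assumption of how the "accumulated PoW" could be computed (it mirrors the form of Bitcoin Core's per-block work score, applied here to per-brick best hashes), and the not-fully-filled terminating brick is the corner-case rule discussed further down.

    from dataclasses import dataclass, field
    from typing import List, Optional

    # Hypothetical constants, for illustration only.
    MAX_BRICK_TXS = 1000        # stand-in for "fully filled" capacity
    BLOCK_TARGET = 1 << 240     # stand-in for the network difficulty target

    def expected_work(h: int) -> int:
        # Expected hashes tried before finding one at or below h; the
        # 2**256/(h+1) form mirrors Bitcoin Core's per-block work score,
        # but applying it to brick best hashes is my reading, not a spec.
        return (1 << 256) // (h + 1)

    @dataclass
    class Brick:
        prev_id: str       # id of the previous brick (parent block for B0)
        brick_id: str      # this brick's own id
        best_hash: int     # minimum hash found while the brick was filling
        txs: List[str] = field(default_factory=list)

        @property
        def full(self) -> bool:
            return len(self.txs) >= MAX_BRICK_TXS

    class BrickchainFork:
        """The separate fork a node keeps while a brickchain streams in."""

        def __init__(self, parent_block_id: str):
            self.parent = parent_block_id
            self.pending: List[Brick] = []

        def on_brick(self, brick: Brick) -> Optional[List[Brick]]:
            # Validate each brick on arrival (here: just linkage), then wait.
            expected_prev = self.pending[-1].brick_id if self.pending else self.parent
            if brick.prev_id != expected_prev:
                raise ValueError("brick does not extend the pending brickchain")
            self.pending.append(brick)
            # The brickchain completes when its aggregated work is equivalent
            # to one block, or when a not-fully-filled terminating brick
            # arrives (the corner case discussed further down).
            done = (not brick.full or
                    sum(expected_work(b.best_hash) for b in self.pending)
                        >= expected_work(BLOCK_TARGET))
            if done:
                chain, self.pending = self.pending, []
                return chain   # accept or reject as a whole, like one block
            return None        # incomplete: validate and wait, nothing special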
> How do we know if the hash the miner does find for a brick was their 'best effort' and they're not just being lazy?

There's an element of luck in the best hash a miner can find: sometimes it takes a long time to meet the difficulty requirement, and sometimes it happens almost instantly. A lazy miner will produce a longer brickchain, because their best hashes will be numerically greater (i.e. carry less work per brick) than those of a more powerful miner. A more competitive miner will deliver the complete brickchain sooner, and hence the lazy miner's half-way-through brickchain will be discarded. It is exactly like the current system with a lazy miner.

> How would we know how 'busy' the mempool was at the time a brick from months or years ago was mined?

I don't understand the question, but I guess the answer is the same as it would be with 'bricks' replaced by 'blocks'.

> Nodes have to be able to run through the entire history of the blockchain and check everything is valid. They have to do this using only the previous blocks they've already validated - they won't have historical snapshots of the mempool (they'll build and mutate a UTXO set, but that's different). Transactions don't contain a 'created-at' time that you could compare to the block's creation time (and if they did, you probably couldn't trust it).

Why does this question apply to the concept of bricks and not to the concept of blocks? I see the resulting blockchain as a chain of blocks and bricks:

Bi   = block at height i
bi,j = brick j at height i

B1 <- B2 <- b3,1 <- b3,2 <- b3,3 <- B4 <- B5 <- b6,1 <- b6,2 <- B7 ...
            --------------------                ------------
            (equivalent to B3)                  (eq. to B6)

> With the current system, Nodes can calculate what the difficulty should be for every block based on those previous blocks' times and difficulties - but how would you know an old brick was valid if its difficulty was low but at the time the mempool was busy, vs. getting a fraudulent brick that is actually invalid because there isn't enough work in it? You can't solve this by adding some mempoolsize field to bricks, as you'd have to blindly trust miners not to lie about them.

The moment a not-fully-filled brick arrives at a node, the brickchain is considered complete and is treated as a block. Say you start working on a brick that can't be filled because you find the mempool empty, or you want to send it for any other reason: it still constitutes a valid brickchain, because only the last brick is not completely filled and all the previous bricks (zero of them in this corner case) are fully filled.

> If we can't be (fairly) certain that a miner put a minimum amount of work into finding a hash, then you lose all the strengths of PoW.

The PoW is still working the same, because the whole brickchain is accepted or rejected atomically (i.e. as a block, with an aggregated PoW that must beat other possible blocks to stay in the longest chain).

> If you weaken the difficulty requirement which is there so that mining blocks is hard so that it is very hard to intentionally fork the chain, re-mine previous blocks, overtake the other fork, and get the network to re-org onto your chain - then there's no Proof of work undergirding consensus in the ledger's state.

If the brickchain is treated as an indivisible structure once it is fully received, there is no difference with the strength of blocks. The technique doesn't weaken the PoW, it spreads it in time; what counts is the amount of energy invested in the whole brickchain every 10 minutes.
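To make the aggregation arithmetic concrete, here is a toy calculation under the same assumed scoring rule as the sketch above (again, an assumption of mine, not a rule specified in the thread); the target T and all numbers are illustrative only.

    def expected_work(h: int) -> int:
        # Expected number of hashes tried before finding one at or below h.
        return (1 << 256) // (h + 1)

    # Toy target: a block is valid at hash <= T, so one block represents
    # expected_work(T) hashes of effort (65535 with this toy T).
    T = 1 << 240
    block_work = expected_work(T)

    # Three bricks whose best hashes sit around 3*T each carry about a third
    # of a block's expected work: together they are computationally
    # equivalent to one block, while any two of them fall short.
    bricks = [3 * T] * 3
    assert sum(expected_work(h) for h in bricks) >= block_work
    assert sum(expected_work(h) for h in bricks[:2]) < block_work

If the rule is deterministic like this, the aggregated work of an old brickchain can be recomputed from the bricks alone, with no mempool snapshot needed.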
> Secondly, where does the block reward go? Do brick miners get a fraction of the reward proportionate to the fraction of the difficulty they got to? Later, when bricks become part of a block, who gets the block reward for that complete block? Who gets the fees? No miner is going to bother mining a merge-bricks-into-block block if the reward isn't the same or better than just mining a regular block, but each miner of the bricks in it would also want a reward. But we can't give them both a block reward, as that'd increase Bitcoin's issuance rate, which might be the only thing people are more strongly opposed to than increasing the blocksize! xD

The coinbase tx goes in only one brick of the brickchain: since a brickchain is treated as one block, the coinbase tx can be the first transaction in the first brick. The reward only happens when the block is buried in the blockchain; in the same way, the reward happens when the complete brickchain is buried in the chain of blocks/brickchains.

>> At this point, instead of discarding transactions, our miner would start working on a new brick B1, linked with B0 as usual.
>>
>> Nodes would allow incoming regular blocks and bricks with hashes that don't satisfy the difficulty rule, provided the brick is fully filled with transactions. Bricks not fully filled would be rejected as invalid to prevent spam (except if it constitutes the last brick of a brickchain, explained below).
>>
>> Let's assume that 10 minutes have elapsed and our miner is in a state where N bricks have been produced, and the accumulated PoW can be calculated mathematically (every brick contains a 'minimum hash found'; when a series of 'minimum hashes' is computationally equivalent to the network difficulty, the full 'brickchain' is valid as a block).

> But the brick sidechain has to become part of the main blockchain - and as you've got N bricks in the time that there should be 1 block, and each brick is a full block, it feels like this is just a convoluted way to increase the blocksize? Every transaction has to be in the ledger somewhere to be confirmed, so even if the block itself is small and stored references to the bricks, Nodes are going to have to use storage to keep all those full bricks.

Except that big blocks are, as I mentioned earlier, sent at once, while this model is more like streaming a block (spreading the data in time).

> It also seems that you'd have to require the bricks sidechain to always be merged into the next actual block - it wouldn't work if the brick chain could keep growing and at the same time the actual blockchain advances (because there'd be risks of double-spends where one tx is in the brick chain and the other in the new block). Which I think further makes this feel like a roundabout way of increasing the blocksize

Not merged: the brickchain would be appended to the previous block or brickchain of the main chain (I don't know what to call it now, lol).

> Despite my critique, this was interesting to think about - and hopefully this is useful (and hopefully I've not seriously misunderstood or said something dumb)

Thanks for your considerations. I am defending the idea in real time; perhaps in one of these replies I will be caught in an impossibility, but until then, let's try to find the catch, if it exists. : )

Marcos

> Angus