Well, a 40-byte reduction per input is excessive too :) But a 30-byte
reduction will do fine.

On Thu, Jul 13, 2017 at 12:10 AM, Sergio Demian Lerner <
sergio.d.lerner@gmail.com> wrote:

> Some responses..
>
>> The proposal adds another gratuitous limit to the system: a maximum
>> transaction size where none existed before, yet this limit is almost
>> certainly too small to prevent actual DoS attacks while it is also
>> technically larger than any transaction that can be included today
>> (the largest possible transaction today is 1 MB minus the block
>> overheads). The maximum resource usage for a maliciously crafted 1 MB
>> transaction is enormous, and permitting two of them greatly exacerbates
>> the existing vulnerability.
>
> I think that limiting the maximum transaction size may not be the best
> possible solution to the N^2 hashing problem, yet it is not a bad start.
>
> There are several viable soft-forking solutions to it:
>
> 1- Soft-fork to perform periodic reductions in the maximum number of
> non-segwit checksigs per input (down to 20).
> 2- Soft-fork to perform periodic reductions in the number of non-segwit
> checksigs per transaction (down to 5K).
> 3- Soft-fork to perform periodic reductions in the amount of data hashed
> by non-segwit checksigs.
>
> Regardless of which option one picks, the soft-fork can be deployed with
> enough time in advance to reduce the exposure. The risk is still low.
> Four years have passed since I reported this vulnerability and yet nobody
> has exploited it. The attack is highly anti-economical, yet every
> discussion about the block size ends up citing this vulnerability.
>
>> > Assuming the current transaction pattern is replicated in a 2 MB
>> > plain-sized block that is 100% filled with transactions, then the
>> > witness-serialized block would occupy 3.6 MB.
>>
>> But in the worst case the result would be 8 MB, which this document
>> fails to mention.
>
> I will mention this worst case in the BIP.
>
> Even if artificially filling the witness space up to 8 MB is
> anti-economical, Segwit exacerbates this problem because each witness
> byte costs 1/4th of a non-witness byte, so the block bloat attack gets
> cheaper than before. I think the guilt lies more in the Segwit discount
> factor than in the plain block size increase.
>
> I would remove the discount factor altogether, and add a fixed (40-byte)
> discount for each input with respect to outputs (not for certain input
> types), to incentivize the cleaning of the UTXO set. A discount for
> inputs cannot be used to bloat an unlimited number of blocks, because
> for each input the attacker needs to first create an output (which gets
> no discount). There is no need to incentivize removing the signatures
> from blocks, because there is already an incentive to do so to save disk
> space.
>
>> > Deploy a modified BIP91 to activate Segwit. The only modification is
>> > that the signal "segsignal" is replaced by "segwit2x".
>>
>> This means that BIP-91 and your proposal are indistinguishable on the
>> network, because the string "segsignal" is merely a variable name used
>> in the software.
>
> No, it is exposed to the RPC mining interface (getblocktemplate). It
> must be redefined, even if it's not a consensus change.
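
To put the N^2 hashing problem above in concrete terms, here is a rough
back-of-the-envelope sketch in Python. The transaction sizes, input counts
and the one-checksig-per-input figure are illustrative assumptions only;
real costs depend on script sizes and sighash types.

# Rough model of legacy (non-segwit) SIGHASH_ALL hashing cost.
# Each signature check re-hashes a serialized copy of roughly the whole
# transaction, so total hashed data scales with n_inputs * tx_size.

def legacy_hash_bytes(tx_size_bytes, n_inputs, checksigs_per_input=1):
    return tx_size_bytes * n_inputs * checksigs_per_input

# Hypothetical attack transactions (sizes and input counts are assumptions):
print(legacy_hash_bytes(1_000_000, 3_500) / 1e9, "GB hashed for a 1 MB tx")
print(legacy_hash_bytes(2_000_000, 7_000) / 1e9, "GB hashed for a 2 MB tx")
# Doubling the transaction size roughly quadruples the hashing work
# (~3.5 GB vs ~14 GB here), which is why per-input or per-transaction
# checksig limits bound the worst case.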
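
Likewise, the 8 MB worst case and the per-input discount idea can be
sketched with the standard weight formula. The base/witness split below is
an arbitrary illustration, and effective_size is a hypothetical helper
using the 30-byte figure suggested at the top of this mail, not a
specification.

# Segwit counts a non-witness byte as 4 weight units and a witness byte as
# 1, i.e. weight = base_size*3 + total_size; segwit2x doubles the cap.
SEGWIT2X_WEIGHT_LIMIT = 8_000_000

def segwit_weight(base_size, total_size):
    return base_size * 3 + total_size

# Worst case: a block that is almost entirely witness data.
base, witness = 100_000, 7_600_000
assert segwit_weight(base, base + witness) <= SEGWIT2X_WEIGHT_LIMIT
print((base + witness) / 1e6, "MB serialized")  # ~7.7 MB, approaching 8 MB

# Hypothetical alternative discussed above: no witness discount at all,
# just a fixed credit per spent input to reward shrinking the UTXO set.
def effective_size(total_size, n_inputs, discount_per_input=30):
    return total_size - n_inputs * discount_per_input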