On Wed, Sep 13, 2017 at 08:27:36AM +0900, Karl Johan Alm via bitcoin-dev wrote:
> On Wed, Sep 13, 2017 at 4:57 AM, Mark Friedenbach via bitcoin-dev
> wrote:
> >> Without the limit I think we would be DoS-ed to death
> >
> > 4MB of secp256k1 signatures takes 10s to validate on my 5-year-old
> > laptop (125,000 signatures, ignoring public keys and other things
> > that would consume space). That's much less harmful than the bad
> > blocks that can be constructed using other vulnerabilities.
>
> Sidenote-ish, but I also believe it would be fairly trivial to keep a
> per-UTXO tally and demand additional fees when trying to respend a
> UTXO which was previously "spent" with an invalid op count. I.e. if
> you sign off on an input for a tx that you know is bad, the UTXO in
> question will be penalized proportionately to the wasted ops when it
> is included in another transaction later. That would probably kill
> that DoS attack, as the attacker would effectively lose bitcoin every
> time, even if the loss was postponed until they spent the UTXO. The
> only thing clients would need to do is add a fee-rate penalty
> instance variable and a mapping of outpoint to penalty value,
> probably stored as a separate .dat file. I think.

Ethereum does something quite like this; it's a very bad idea for a few
reasons:

1) If you bailed out of verifying a script due to wasted ops, how did
you know the transaction trying to spend that txout did in fact come
from the owner of it? Since validation was aborted, the signatures were
never fully checked, so an attacker could attach penalties to txouts
they don't own.

2) How do you verify that transactions were penalized correctly without
*all* nodes re-running the DoS script?

3) If the DoS is significant enough to matter on a per-node level,
you're going to have serious problems anyway, quite possibly so serious
that the attacker manages to cause consensus to fail. They can then
spend the txouts in a block that does *not* penalize their outputs,
negating the deterrent.

--
https://petertodd.org 'peter'[:-1]@petertodd.org
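
For scale, the quoted figures work out to roughly 10 s / 125,000, or
about 80 microseconds per secp256k1 signature verification. And for
concreteness, below is a minimal sketch of the bookkeeping the quoted
proposal describes: an outpoint-to-penalty mapping plus a fee-rate
surcharge on respend. Every name and the penalty formula here are
hypothetical illustrations, not code from any actual client, and the
sketch deliberately ignores the attribution and consensus problems
raised above.

    # Hypothetical sketch of the per-UTXO penalty bookkeeping described
    # in the quoted proposal; all names and the penalty formula are
    # illustrative assumptions, not code from any real client.

    PENALTY_PER_WASTED_OP = 0.01  # assumed surcharge (sat/vbyte) per wasted op

    class PenaltyTracker:
        def __init__(self):
            # Outpoint (txid, vout) -> accumulated wasted-op count. The
            # proposal suggests persisting this in a separate .dat file.
            self.wasted_ops = {}

        def record_failed_spend(self, outpoint, ops_executed):
            """Record the ops wasted when script validation bailed out."""
            self.wasted_ops[outpoint] = self.wasted_ops.get(outpoint, 0) + ops_executed

        def required_fee_rate(self, outpoint, base_fee_rate):
            """Fee rate a later spend of this outpoint must pay."""
            penalty = self.wasted_ops.get(outpoint, 0) * PENALTY_PER_WASTED_OP
            return base_fee_rate + penalty

        def accept_spend(self, outpoint, fee_rate, base_fee_rate):
            """Accept the respend only if it covers the accumulated penalty."""
            return fee_rate >= self.required_fee_rate(outpoint, base_fee_rate)

Note that this is purely local, per-node bookkeeping: nothing in it
answers objection (1), since the aborted validation never established
who attempted the spend, nor objection (2), since other nodes cannot
audit the recorded penalties without re-running the offending scripts
themselves.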