Subject: [bitcoin-dev] A validation-cost metric for aggregate limits and fee determination
From: Mark Friedenbach @ 2015-11-04 22:47 UTC
  To: Bitcoin Dev


At the first Scaling Bitcoin workshop in Montreal I presented on the topic
of "bad blocks" that take an excessive amount of time to validate. You can
read a transcript of this talk here:

http://diyhpl.us/wiki/transcripts/scalingbitcoin/alternatives-to-block-size-as-aggregate-resource-limits/

The core message was that the assumption made by the design parameters of
the system, namely that validation costs scale linearly with transaction or
block size, is wrong. In particular, certain kinds of transactions have
validation costs which scale quadratically with size. For example, the
construction of SIGHASH_ALL results in each input signing a different
message digest, meaning that the entire transaction (minus the scriptSigs)
is rehashed for each input. As another example, the number of signature
operations performed during block validation is unlimited when the
signature checks are contained within the scriptPubKey (this scales
linearly, but with a very large constant factor). The severity of these
issues increases as the aggregate limits in place on maximum transaction
and block size increase.
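
To illustrate the scaling, here is a rough sketch (not Bitcoin Core's
actual sighash serialization; the per-input and per-output byte sizes are
placeholders):

    # Rough sketch of legacy SIGHASH_ALL hashing cost; the sizes here
    # are illustrative placeholders, not exact serialization sizes.
    def legacy_sighash_bytes(num_inputs, num_outputs,
                             input_size=148, output_size=34, overhead=10):
        # Each input signs its own digest of the whole transaction
        # (minus scriptSigs), so the transaction body is re-hashed once
        # per input.
        tx_body = overhead + num_inputs * input_size + num_outputs * output_size
        return num_inputs * tx_body

    # Doubling the number of inputs roughly quadruples the bytes hashed:
    print(legacy_sighash_bytes(100, 1))   # ~1,484,400 bytes hashed
    print(legacy_sighash_bytes(200, 1))   # ~5,928,800 bytes hashed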

There have been various solutions suggested, and I would like to start a
public discussion to see if consensus can be reached over a viable approach.

Gavin, for example, has written code that tracks the number of bytes hashed
and enforces a separate per-block limit on that aggregate value. Other
costs could be constrained in a similar whack-a-mole way. I have two
concerns with this approach:

1. There would still exist a gap between the average-case validation cost
of a full block and the worst case validation cost of a block that was
specifically constructed to hit every limit.

2. Transaction selection, and by extension fee determination, would become
a much more complicated multi-dimensional optimization problem. Since fee
management in particular is code replicated across a lot of infrastructure,
I would be very concerned about making optimal behavior much more difficult.

My own suggestion, which I submit for consideration, is to use a linear
function of the various costs involved (signatures verified, bytes hashed,
inputs consumed, script opcodes executed, etc.). The various algorithms
used for transaction selection and fee determination can then be reused,
using the output of this new linear function as the "size" of the
transaction.
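
As a rough sketch of what such a linear metric could look like (the
resource names follow the list above, but the coefficients are made-up
placeholders, not proposed values):

    # Hypothetical linear validation-cost metric; the coefficients are
    # illustrative only and would have to come from benchmarking.
    COEFF = {
        'sigops':       50.0,   # signatures verified
        'bytes_hashed':  0.01,  # bytes hashed to build signature hashes
        'inputs':        5.0,   # inputs consumed
        'opcodes':       0.1,   # script opcodes executed
        'bytes':         1.0,   # serialized transaction size
    }

    def validation_cost(tx_stats):
        """tx_stats maps each resource name to the amount the tx uses."""
        return sum(COEFF[k] * tx_stats.get(k, 0) for k in COEFF)

    # Existing fee logic can then treat this number exactly like "size",
    # e.g. fee_rate = tx.fee / validation_cost(tx_stats)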

Separately, many others, including Greg Maxwell, have advocated for a
"net-UTXO" metric instead of, or in combination with, a validation-cost
metric. In its pure form the block size limit would be replaced with a
maximum UTXO set increase, thereby applying a cost in extra fee required to
create unspent outputs. This has the distinct advantage of making dust
outputs considerably more expensive than regular spend outputs.
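
A minimal sketch of the pure form of that rule (the limit value and the
transaction fields are hypothetical):

    # Pure net-UTXO limit, sketched; MAX_UTXO_GROWTH is a made-up number
    # and tx.inputs/tx.outputs/tx.is_coinbase are assumed fields.
    MAX_UTXO_GROWTH = 20000  # net new unspent outputs allowed per block

    def net_utxo_delta(block):
        created = sum(len(tx.outputs) for tx in block.transactions)
        spent   = sum(len(tx.inputs)  for tx in block.transactions
                      if not tx.is_coinbase)
        return created - spent

    def block_within_utxo_limit(block):
        return net_utxo_delta(block) <= MAX_UTXO_GROWTH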

For myself, I remain open to the possibility of adding a UTXO set size
corrective factor to a chiefly validation-cost metric. It would be nice to
reward users for cleaning up scattered small outputs, reward miners for
including dust-be-gone outputs, and make spam attacks more costly. But
doing so requires setting aside some unused validation resources in order
to reward miners who clean up the UTXO, which means it widens the gap
between average and worst case block validation times. Also, the size of
the UTXO database is only a concern because of how Bitcoin Core is
currently structured -- with e.g. UTXO or STXO commitments it could be the
case that in the future full nodes do not store the UTXO and instead carry
proofs of their inputs as prunable witness data. If we choose a net-UTXO
metric however, we will be stuck with it for some time.

I will be submitting a talk proposal for Scaling Bitcoin on this topic, but
I would like to get some feedback from the developer community first.
Anyone have any thoughts to add?


Subject: Re: [bitcoin-dev] A validation-cost metric for aggregate limits and fee determination
From: Gavin Andresen @ 2015-11-05  9:23 UTC
  To: Mark Friedenbach; +Cc: Bitcoin Dev


I have several thoughts:

Weighing CPU validation cost should be reasonably straightforward-- just
pick some arbitrary, commonly-available, recent hardware and then benchmark
the two things that take the bulk of validation time (hashing to create the
signature hash, then ECDSA validation), and weigh the terms in the
validation cost equation appropriately (e.g. if hashing X bytes of data
takes the same amount of CPU time as one libsecp256k1 validation, count the
CPU cost of an OP_CHECKSIG as 1 + actual_bytes_hashed/X).
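
For example (numbers invented purely for illustration), with X expressed
in bytes:

    # Illustrative CPU-cost weighting; BYTES_PER_ECDSA would come from a
    # benchmark on the reference hardware, the value here is made up.
    BYTES_PER_ECDSA = 500000   # hashing this many bytes ~= 1 sig verify

    def checksig_cpu_cost(bytes_hashed):
        # One unit for the ECDSA verification itself, plus the hashing
        # needed to build the signature hash, in the same unit.
        return 1 + bytes_hashed / BYTES_PER_ECDSA

    print(checksig_cpu_cost(250000))  # -> 1.5 validation-equivalents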

But how should bandwidth cost be counted? There isn't an obvious "Y GB of
bandwidth-per-month equals 1 ECDSA validation" conversion. We need to find common units
for the terms in the validation cost equation for it to make sense,
otherwise we're adding apples and oranges.

I think the only unit that will work is "percentage of maximum validation
ability for some reference hardware running with a network connection
capable of some reference bandwidth."

For example, imagine the reference were a typical home computer being sold
today, running with some multiple or fraction of the average global
broadband connection speed of 5 Mbps. CPU cost to validate a block can then
be expressed as a percentage of maximum capacity, as can bandwidth--
hooray, two metrics with the same units, so they can be added up.  If the
result is less than 100%, then the block is valid-- it can be received and
validated in a reasonable amount of time.


Rolling in UTXO growth is harder, for two reasons:
1) UTXO changes per block can be negative or positive, as opposed to
bandwidth/CPU costs.
2) It is not clear how to choose or benchmark "reference UTXO growth"

(1) could be finessed to just treat UTXO shrinkage as zero.
(2) could just be decided by picking a reasonable growth number. Since we
want the UTXO set to fit into main memory, something a bit below the
long-ish term price/performance trend of main memory would be a good target.

So, starting with that growth rate and an initial UTXO size in bytes,
divide by the number of blocks in a year to get a maximum UTXO growth in
bytes per block.

When validating a block, take the actual UTXO growth, express it as a
percentage of the maximum allowed (make it zero if it is negative), and
combine with the CPU and bandwidth percentages.

If the total is less than 100%, block is valid. Otherwise, invalid.
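
Putting the three terms together as described, a sketch might look like
this (every reference constant below is a stand-in, not a measured or
agreed value):

    # "Sum of percentages < 100%" rule, sketched; all constants are
    # placeholders for benchmarked/agreed reference values.
    REF_CPU_COST_PER_BLOCK = 1000000          # validation-equivalents per ~10 min
    REF_BYTES_PER_BLOCK = 5000000 * 600 // 8  # 5 Mbps for ~10 minutes
    UTXO_GROWTH_PER_YEAR = 2**30              # e.g. ~1 GiB/year budget
    BLOCKS_PER_YEAR = 52560
    REF_UTXO_BYTES_PER_BLOCK = UTXO_GROWTH_PER_YEAR // BLOCKS_PER_YEAR

    def block_is_valid(cpu_cost, block_bytes, utxo_delta_bytes):
        cpu_pct  = cpu_cost / REF_CPU_COST_PER_BLOCK
        bw_pct   = block_bytes / REF_BYTES_PER_BLOCK
        # Negative UTXO growth (net shrinkage) is clamped to zero.
        utxo_pct = max(0, utxo_delta_bytes) / REF_UTXO_BYTES_PER_BLOCK
        return cpu_pct + bw_pct + utxo_pct < 1.0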

----------------

Now.... with all of that worked through, I'm not 100% sure it solves the
"do miners or wallets have to solve a bin-packing problem to determine
which transactions to put into their blocks or what fees to attach"
question.

I think it mostly works out-- instead of fee-per-kilobyte, it would be
fee-per-validation-cost (which is in the weird units "fraction of 100%
validation cost").

But the UTXO term might be a problem-- transactions that create more UTXOs
than they spend might end up being costly. I'm traveling right now;
perhaps somebody could pick some arbitrary reference points and try to get
a rough idea of what different transactions might pay in fees (e.g. if a
one-input-two-output transaction had a cost of X, a two-input-one-output
transaction would have a cost of X/something).

I'm not convinced that a single validation cost metric is the best
approach-- it might be better to break the cost into three (UTXO growth,
CPU, and bandwidth) and just let miners set reasonable transaction
selection policies that keep each of the three under whatever caps are
imposed on each. If a miner comes up with a clever algorithm that lets them
pack in more transactions and get more fees, good for them!

But I do like the simplicity of a single validation cost metric.

--
Gavin Andresen
