On Fri, Jul 6, 2018 at 6:00 PM, Gregory Maxwell <greg@xiph.org> wrote:
On Fri, Jul 6, 2018 at 9:05 PM, Russell O'Connor via bitcoin-dev
<bitcoin-dev@lists.linuxfoundation.org> wrote:
> If the inputs to hash were reordered as hash(bytes(dG) || bytes(x(R)) || m)
> then there is an opportunity for the SHA256 expander to be partially prefilled
> for a fixed public key.  This could provide a little benefit, especially
> when multiple signatures for a single public key need to be generated and/or
> verified.  If all things are otherwise equal, perhaps this alternate order
> is better.
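
For concreteness, a minimal sketch of the two orderings being compared,
assuming the 33-byte compressed pubkey and the 32-byte R.x and m sizes
from the draft (the values below are placeholders, and hashlib is used
only to show the full hash):

    import hashlib

    P  = bytes(33)   # bytes(dG): compressed public key (placeholder)
    Rx = bytes(32)   # bytes(x(R)): nonce x-coordinate (placeholder)
    m  = bytes(32)   # message (placeholder)

    current  = Rx + P + m   # draft ordering:    hash(bytes(x(R)) || bytes(dG) || m)
    proposed = P + Rx + m   # proposed ordering: hash(bytes(dG) || bytes(x(R)) || m)

    # In the proposed ordering the first 33 bytes of the first 64-byte
    # compression block are fixed per public key, so part of the SHA-256
    # message schedule ("expander") for that block could be precomputed.
    e_current  = hashlib.sha256(current).digest()
    e_proposed = hashlib.sha256(proposed).digest()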

There is a minor design preference to have the nonce before the
message when H() is an MD-style hash function.  Say the attacker knows
some weakness in H and can find pairs of messages m and m' so that the
compression function results in the same midstate.  He could then ask
you to sign m but get a signature that also works for m'.  If the
signer-controlled R value comes first, then this doesn't work.  The
pubkey being where it is in the current design just follows from the
idea that it is logically prepended to the message.  I don't think the
pubkey is sufficiently attacker-controlled that the above argument
would apply, so H(P || R.x || m) would be okay.
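
For concreteness, the concern spelled out (compress* below stands for
the iterated SHA-256 compression function over whole 64-byte blocks;
no such collision is actually known for SHA-256):

    suppose an attacker finds block-aligned messages mA != mB with
        compress*(IV, mA) == compress*(IV, mB)      (same midstate)
    then for any common suffix s:
        H(mA || s) == H(mB || s)
    so if the challenge were H(m || R.x || P), a signature obtained for mA
    would also verify for mB.  With the nonce first, e = H(R.x || ... || m),
    the collision would have to be found against the midstate after
    absorbing the signer-chosen R.x, which the attacker cannot predict.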

BUT, the sha256 compression function reads 64 bytes at a time.  The
P || R || m ordering would not let you precompute a whole compression
function run, but instead would just let you hardwire part of the
expander in a pubkey-dependent way -- an optimization I'm pretty
confident virtually no one would use.  (Hardwiring to a constant, yes.
Hardwiring to a reused dynamic value that comes in from the network,
no.)
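
To put numbers on that, a minimal sketch of the block layout (sizes as
in the draft: 33-byte P, 32-byte R.x, 32-byte m):

    # P || R.x || m is 33 + 32 + 32 = 97 bytes.  SHA-256 consumes 64-byte
    # blocks, so the first block is the 33 pubkey bytes followed by the
    # first 31 bytes of R.x.  Since R.x changes with every signature, no
    # whole compression run is constant per pubkey; only the expander
    # words fed purely by the 33 fixed bytes could be hardwired.
    P_LEN, RX_LEN, M_LEN = 33, 32, 32
    fixed_in_first_block    = P_LEN        # 33 bytes, per-pubkey
    variable_in_first_block = 64 - P_LEN   # 31 bytes of R.x, per-signature
    print(fixed_in_first_block, variable_in_first_block)  # 33 31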

Right.  I readily admit my proposal has extremely marginal efficiency benefits. However, I didn't realize there is also an extremely marginal security benefit to placing the nonce in front of everything.  Although these things are so marginal that it is perhaps a waste of time to even be considering them, I think I'd judge the extremely marginal security benefit to exceed the value of the extremely marginal efficiency gain.  It's probably best to leave the nonce at the beginning after all.
 
If instead the hash function were defined with 31 zero bytes inserted,
as 31 zero bytes || P || R || m (or P || 31 zero bytes || R || m, I'm
not sure which would be better), an entire midstate could be cached
for different pubkeys.  m is often 32 bytes, sadly -- but the final
compression run in that case would only be the constant update with
the length, and almost-all-zeros plus a constant length is an easy
optimization.  (Bitcoin Core even has it for computing
sha256(sha256()).)

I did consider this; however, the 31 bytes of zeros plus the SHA-256 padding mean we would need to compress *three* blocks in general instead of the current proposal's two.  This burden seems to exceed the benefit of maybe sometimes getting a slightly faster two-blocks-with-lots-of-zeros computation when public keys are reused.  I wouldn't recommend it.
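
The block counts behind that comparison, sketched in Python (SHA-256
appends a 0x80 byte and an 8-byte length and pads up to a multiple of
64 bytes):

    def sha256_blocks(msg_len):
        # number of 64-byte compression runs for a msg_len-byte input
        return (msg_len + 1 + 8 + 63) // 64

    current_proposal = 32 + 33 + 32        # R.x || P || m, 97 bytes
    zero_padded      = 33 + 31 + 32 + 32   # P || 31 zero bytes || R.x || m, 128 bytes
    print(sha256_blocks(current_proposal)) # 2
    print(sha256_blocks(zero_padded))      # 3 (only the first is cacheable per pubkey)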

There is an alternative of just dropping the SHA-256 length padding, so that P || 31 zero bytes || R || m fits in exactly two compression runs.  This would still be secure in this context because the data is of fixed size.  However, I doubt it is worth breaking the API of every SHA-256 library in existence to enable that.
 
[I'm not really sure if I was clear, so I'll try TLDRing it:  I think
optimizing sha256 where part of the input is constant is realistic,
optimizing midstate reuse is realistic, optimizing where part of the
input is merely reused (not constant) is less realistic.  If we insert
padding and put P first, we can make it possible to midstate-cache P,
and the 'extra' compression function run ends up with all constant
input, so it could be made faster.]