--- Day changed Thu Nov 30 2017
01:39 -!- nickler [~nickler@185.12.46.130] has quit [Ping timeout: 240 seconds]
01:39 -!- [b__b] [~b__b@ec2-54-85-45-223.compute-1.amazonaws.com] has quit [Remote host closed the connection]
01:39 -!- nickler [~nickler@185.12.46.130] has joined #secp256k1
01:40 -!- [b__b] [~b__b@ec2-54-85-45-223.compute-1.amazonaws.com] has joined #secp256k1
04:29 -!- SopaXorzTaker [~SopaXorzT@unaffiliated/sopaxorztaker] has joined #secp256k1
06:38 -!- SopaXorzTaker [~SopaXorzT@unaffiliated/sopaxorztaker] has quit [Remote host closed the connection]
06:39 -!- arubi [~ese168@gateway/tor-sasl/ese168] has quit [Remote host closed the connection]
06:40 -!- arubi [~ese168@gateway/tor-sasl/ese168] has joined #secp256k1
07:40 -!- arubi [~ese168@gateway/tor-sasl/ese168] has quit [Remote host closed the connection]
07:41 -!- arubi [~ese168@gateway/tor-sasl/ese168] has joined #secp256k1
07:51 -!- jtimon [~quassel@164.31.134.37.dynamic.jazztel.es] has joined #secp256k1
09:44 -!- hdevalence [~hdevalenc@199-188-193-243.PUBLIC.monkeybrains.net] has joined #secp256k1
11:05 -!- SopaXorzTaker [~SopaXorzT@unaffiliated/sopaxorztaker] has joined #secp256k1
11:58 -!- SopaXorzTaker [~SopaXorzT@unaffiliated/sopaxorztaker] has quit [Remote host closed the connection]
12:00 -!- hdevalence [~hdevalenc@199-188-193-243.PUBLIC.monkeybrains.net] has quit [Ping timeout: 240 seconds]
12:03 -!- hdevalence [~hdevalenc@199-188-193-243.PUBLIC.monkeybrains.net] has joined #secp256k1
12:09 -!- hdevalence [~hdevalenc@199-188-193-243.PUBLIC.monkeybrains.net] has quit [Ping timeout: 264 seconds]
12:23 -!- hdevalence [~hdevalenc@199-188-193-243.PUBLIC.monkeybrains.net] has joined #secp256k1
13:21 -!- jtimon [~quassel@164.31.134.37.dynamic.jazztel.es] has quit [Ping timeout: 248 seconds]
14:01 < hdevalence> How would I fairly benchmark the cost of using secp256k1 to do double-base vartime scalar mult? i.e., a*A + b*B for public scalars a,b, varying public point A, fixed basepoint B
14:03 < hdevalence> I looked at the bench_verify source and it's doing various signature-specific stuff that I don't really want to include in the measurement
14:05 -!- jtimon [~quassel@164.31.134.37.dynamic.jazztel.es] has joined #secp256k1
14:07 < sipa> hdevalence: look at nickler's PR that adds multi-multiplication
14:07 < sipa> it has benchmarks
14:07 < sipa> also, what do you mean by fixed B?
14:08 < sipa> if by fixed you mean as fixed as the generator G, for which we have a large set of precomputed multiples built into the binary... there are advantages :)
14:08 < hdevalence> yes, that's what I mean
14:09 < sipa> well then you already have it, no?
14:09 < hdevalence> fixed meaning that precomputation is allowed, varying meaning that precomputation is not
14:09 < sipa> oh, you mean there is no ecmult benchmark, only an ecdsa benchmark
14:09 < hdevalence> yeah
14:09 < sipa> though ecdsa verification is almost entirely just an ecmult
14:10 < hdevalence> I thought there was a scalar inversion?
14:11 < sipa> right
14:12 < sipa> well, soon we'll have an ecmult benchmark for various numbers of input parameters
14:13 < hdevalence> cool!
14:18 < hdevalence> if I just subtract the scalar inversion cost from bench_internal I get 227 kcy on SKX
14:19 < hdevalence> without endomorphisms
14:29 < gmaxwell> hdevalence: out of curiosity, what's your application?
14:30 < hdevalence> I implemented a prime-order group and I want to compare it to other prime-order groups
14:31 < gmaxwell> okay, good!
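Verification is almost entirely one double-base multiplication plus a scalar inversion (as sipa notes above), so timing secp256k1_ecdsa_verify and then subtracting the scalar-inversion cost reported by bench_internal approximates the raw a*A + b*G cost. Below is a rough, hypothetical sketch of the verify-timing half using only the public API; it is not code from the log or from the library. It reports wall-clock time rather than cycle counts and reuses a fixed message/signature pair, whereas libsecp256k1's own bench_verify.c does this more carefully.

/* Hypothetical sketch: ECDSA verification as a proxy for one
 * double-base vartime multiplication a*A + b*G.
 * Build with: gcc verify_bench_sketch.c -lsecp256k1 */
#include <stdio.h>
#include <string.h>
#include <time.h>
#include <secp256k1.h>

#define ITERS 20000

int main(void) {
    secp256k1_context *ctx =
        secp256k1_context_create(SECP256K1_CONTEXT_SIGN | SECP256K1_CONTEXT_VERIFY);
    unsigned char seckey[32], msg[32];
    secp256k1_pubkey pubkey;
    secp256k1_ecdsa_signature sig;
    clock_t begin, end;
    int i;

    /* Fixed test inputs; error handling omitted for brevity. */
    memset(seckey, 0x42, sizeof(seckey));
    memset(msg, 0x37, sizeof(msg));
    secp256k1_ec_pubkey_create(ctx, &pubkey, seckey);
    secp256k1_ecdsa_sign(ctx, &sig, msg, seckey, NULL, NULL);

    begin = clock();
    for (i = 0; i < ITERS; i++) {
        /* Each verify performs (roughly) one a*A + b*G multiexp,
         * plus the scalar inversion discussed above. */
        secp256k1_ecdsa_verify(ctx, &sig, msg, &pubkey);
    }
    end = clock();
    printf("%.1f us/verify (includes one scalar inversion per call)\n",
           1e6 * (double)(end - begin) / CLOCKS_PER_SEC / ITERS);

    secp256k1_context_destroy(ctx);
    return 0;
}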
14:31 < gmaxwell> If you said you were going to do something where having a cofactor of 1 wasn't important, I would tell you to not bother with secp256k1 and just use edXXX stuff.
14:32 < gmaxwell> I think you'll find libsecp256k1 to be the fastest prime order group implementation in this size range, barring perhaps some exotic stuff. If you find anything that is even tying with it, we'd love to know so we could steal the optimizations. :)
14:32 < hdevalence> :D
14:32 < gmaxwell> or at least fastest with endo in use, and competitive otherwise.
14:32 < hdevalence> well, there's the Ristretto implementation in -dalek
14:33 < gmaxwell> yes, though I thought you can't decaf ed25519? (though you can for ed448)
14:33 < hdevalence> you can, it's just more work
14:34 < hdevalence> it's not really specified in the decaf paper, so when Isis and I implemented it we ended up with a different encoding than Mike Hamburg did
14:34 < hdevalence> so "Ristretto" is a tweaked version of decaf that works with cofactor 8
14:35 < gmaxwell> ah. Wasn't aware of that. I'd still expect libsecp256k1-endo to be clearly faster; dunno without. As an aside, do you know anything about the IPR status of decaf? The fact that Mike Hamburg works for Rambus and works on this stuff makes me feel a little uneasy.
14:36 < gmaxwell> (perhaps you can tell I haven't looked that far into decaf beyond knowing the basics)
14:37 < hdevalence> re IPR, I don't know off the top of my head
14:38 < gmaxwell> fair enough.
14:39 < gmaxwell> I've just kinda been hoping it would end up in an IETF draft and trigger disclosures.
14:41 < hdevalence> on the timing front, I see 185 kcy for -dalek's double-base scalar mult
14:42 < gmaxwell> interesting!
14:42 < gmaxwell> is that operating in the decaf group, or is there an encoding stage that is used in -dalek?
14:43 < gmaxwell> if that holds with your above numbers then libsecp256k1 should only be just slightly faster/tied with endo in use. How big a precomputation does dalek use?
14:43 < hdevalence> our RistrettoPoint is just a newtype wrapper around an edwards ExtendedPoint, so the Ristretto cost only comes into play when doing encode/decode
14:44 < hdevalence> for single points, that cost is essentially the same as a normal edwards decode/encode
14:46 < hdevalence> the precomputation for double-base scmul is essentially the same as other ed25519 implementations
14:46 < hdevalence> 8 points
14:46 < gmaxwell> How expensive is your decompression relative to scalar mult? it's non-trivial for us, and because 2^255-19 is congruent to 5 mod 8 I might guess your sqrt is slightly slower.
14:48 < gmaxwell> (for our aggregate signature validation, we can end up spending more time on point decompression than the multiexp in plausible bitcoin usage, which is kind of amusing.)
14:50 < hdevalence> uh, one min
14:50 < hdevalence> iirc it's something like 1/3 of the cost of a fixed-base scmul, 1/10 the cost of a double-base scmul
14:51 < gmaxwell> I think that's pretty similar to libsecp256k1, pretty cool.
14:52 < gmaxwell> Have you seen much interest in migrating up from ~256-bit groups, post the NSA recommendation to use 384 bits or larger?
14:52 < hdevalence> ¯\_(ツ)_/¯
14:52 < gmaxwell> haha
14:53 < gmaxwell> for us in bitcoin land we have a group chosen for us, so we get a bit of an exercise in constrained development.
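The decompression cost gmaxwell mentions can be probed through the public API as well: secp256k1_ec_pubkey_parse on a 33-byte compressed key has to recover y from x, which costs one field square root. A minimal, hypothetical timing loop follows; as with the sketch above it measures wall-clock time rather than cycle counts and is not part of the library's own benchmarks.

/* Hypothetical sketch: timing point decompression via pubkey parsing.
 * Build with: gcc decompress_bench_sketch.c -lsecp256k1 */
#include <stdio.h>
#include <string.h>
#include <time.h>
#include <secp256k1.h>

#define ITERS 20000

int main(void) {
    secp256k1_context *ctx =
        secp256k1_context_create(SECP256K1_CONTEXT_SIGN | SECP256K1_CONTEXT_VERIFY);
    unsigned char seckey[32];
    unsigned char compressed[33];
    size_t len = sizeof(compressed);
    secp256k1_pubkey pubkey, parsed;
    clock_t t0, t1;
    int i;

    /* Make a key and its 33-byte compressed encoding (sign byte + x). */
    memset(seckey, 0x42, sizeof(seckey));
    secp256k1_ec_pubkey_create(ctx, &pubkey, seckey);
    secp256k1_ec_pubkey_serialize(ctx, compressed, &len, &pubkey,
                                  SECP256K1_EC_COMPRESSED);

    t0 = clock();
    for (i = 0; i < ITERS; i++) {
        /* Each parse recovers y from x, i.e. one field square root. */
        secp256k1_ec_pubkey_parse(ctx, &parsed, compressed, len);
    }
    t1 = clock();
    printf("%.2f us per decompression\n",
           1e6 * (double)(t1 - t0) / CLOCKS_PER_SEC / ITERS);

    secp256k1_context_destroy(ctx);
    return 0;
}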
14:53 < hdevalence> I mean my assumption is that any ECC break that would cause 256-bit groups to be insecure would also affect 384-bit groups
14:56 < gmaxwell> hdevalence: so the argument that Dan Boneh gave me is that, particularly for quantum computers, there are serial steps in the attacks which grow with the key size and must be significantly smaller than the coherence time... and longer coherence time seems to get exponentially harder; so it is plausible that small increases in group size could make for real practical robustness differences.
14:56 < hdevalence> aha
14:56 < hdevalence> okay, that sounds like it might be plausible
14:57 < sipa> but is there any reason why 256 in particular would not be enough?
14:57 < gmaxwell> Obviously it depends on the application; there are plenty of applications where the crypto performance/space hardly matters, though for those I dunno why you wouldn't just use something using 2^521-1 as the field.
14:58 < sipa> right
14:59 < gmaxwell> sipa: not that is known to the public; but it's interesting that NSA made public advice. Other than a conspiracy theory that they were trying to discourage the use of some 256-bit group specifically without admitting which one (because of a curve-specific vuln, or because of backdoored parameters in the most likely 384-bit choices)... you'd think they wouldn't do this without some actual reason.
15:01 < hdevalence> gmaxwell: https://gist.github.com/hdevalence/153a8dc57c6cfb7069281318efd570ba
16:02 -!- bsm117532 [~mcelrath@c-73-119-55-73.hsd1.ma.comcast.net] has joined #secp256k1
16:02 -!- hdevalence [~hdevalenc@199-188-193-243.PUBLIC.monkeybrains.net] has quit [Ping timeout: 240 seconds]
16:13 -!- hdevalence [~hdevalenc@199-188-193-243.PUBLIC.monkeybrains.net] has joined #secp256k1
17:41 -!- hdevalence [~hdevalenc@199-188-193-243.PUBLIC.monkeybrains.net] has quit [Quit: hdevalence]
18:22 -!- midnightmagic_ [~midnightm@unaffiliated/midnightmagic] has joined #secp256k1
18:27 -!- Netsplit over, joins: [b__b]
19:24 -!- jtimon [~quassel@164.31.134.37.dynamic.jazztel.es] has quit [Ping timeout: 248 seconds]
21:59 -!- midnightmagic_ is now known as midnightmagic
22:31 -!- bsm117532 [~mcelrath@c-73-119-55-73.hsd1.ma.comcast.net] has quit [Quit: Leaving.]