--- Log opened Tue Oct 22 00:00:44 2019
01:44 < real_or_random> grr I was not aware of https://github.com/bitcoin/bitcoin/commit/a3603ac6f07966036e56554cd754a57791a3491a#diff-2c33d0ff0ed53b0902e4af9a745a152f
01:45 < real_or_random> which fixes some of the issues in core's copy of ecdsa_signature_parse_der_lax, which is taken from the libsecp256k1 contrib/
01:46 < real_or_random> I fixed the same in https://github.com/bitcoin-core/secp256k1/commit/ec8f20babd8595f119e642d3833ee90bacc12873 upstream
01:47 < real_or_random> I know there's a reason why der_lax is only in contrib but code duplication is just evil ^^
01:47 < gmaxwell> your patch was nicer.
01:48 < gmaxwell> :)
01:48 < real_or_random> yep, and addresses more issues :)
01:49 < real_or_random> also we forgot to change core's copy when updating the secp256k1 subtree (because it's not in the subtree ^^)
01:51 < gmaxwell> real_or_random: to be fair, the changes are certificational, and not likely to create actual issues in practice.
01:51 < real_or_random> yes indeed
01:52 < gmaxwell> (unless you know some real system that will sometimes allocate structs a few bytes short of the size_t boundary. :) But indeed, good thing to note. I /think/ there may be some automation in bitcoin core to yell at people when they edit files that come from other repos, perhaps it should be nagging on the contrib things there too.
01:53 < gmaxwell> and sorry, I tuned out of that original PR after some dumb sub-argument about the original 'intentionally openssl-like' handling of large encodings.
01:54 < real_or_random> yes i've seen it
01:55 < gmaxwell> How'd you notice the discrepancy?
01:56 < real_or_random> I was about to reply to https://github.com/rust-bitcoin/rust-secp256k1/issues/168#issuecomment-544766667
01:57 < gmaxwell> amusingly, for your PR in libsecp256k1 I fished for additional reviewers from bitcoin core, so several people also missed the discrepancy.
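The class of hardening being discussed (certificational fixes to lax DER parsing) can be illustrated with a toy length reader. This is a hypothetical Python sketch, not the actual C code from contrib/: the point is that when a long-form length field claims more bytes than remain in the input, the parser must bail out rather than let the accumulator wrap or read past the buffer.

```python
def read_der_length_lax(buf, pos):
    """Leniently read a DER length at buf[pos]; return (length, new_pos).

    Accepts non-minimal encodings (e.g. 0x81 0x05 for length 5), but
    rejects any length that exceeds the remaining input, so an oversized
    length field can never overflow an accumulator or run past the end.
    """
    if pos >= len(buf):
        raise ValueError("truncated")
    b = buf[pos]
    pos += 1
    if b < 0x80:                      # short form: the byte is the length
        length = b
    else:                             # long form: b & 0x7f length bytes follow
        n = b & 0x7F
        if pos + n > len(buf):
            raise ValueError("truncated length field")
        length = 0
        for i in range(n):
            # cap before shifting so the value cannot grow without bound
            if length > len(buf):
                raise ValueError("length overflows input")
            length = (length << 8) | buf[pos + i]
        pos += n
    if pos + length > len(buf):
        raise ValueError("content runs past end of input")
    return length, pos

# minimal short form and a non-minimal long form both parse:
assert read_der_length_lax(bytes([0x03, 1, 2, 3]), 0) == (3, 1)
assert read_der_length_lax(bytes([0x81, 0x03, 1, 2, 3]), 0) == (3, 2)
```

A claimed length of 0xFFFFFFFF over a 5-byte input is rejected at the cap check rather than wrapping, which is the shape of the overflow issue the commits above address.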
02:12 -!- ddustin [~ddustin@unaffiliated/ddustin] has joined #secp256k1
02:15 < gmaxwell> real_or_random: I dunno about what the goals of rust-bitcoin are, but an extra datapoint is that MANY altcoins didn't apply strict der like ..ever.. or at least not until much later than bitcoin, and have all sorts of crazy signatures that bitcoin doesn't have.
02:16 < gmaxwell> The laxness required for the historical bitcoin chain is much less than the laxness permitted by that code. ... but some other random altcoins need much more.
02:17 < gmaxwell> (in fact PPcoin or peercoin or whatever it's been renamed to now was even split into multiple networks via the openssl size_t behavior inconsistency)
02:23 < real_or_random> good point. let me copy those three messages to the issue
02:28 < gmaxwell> also, matching openssl's behavior as much as the laxder parser does was quite hard. (and it intentionally didn't finish the job, because some of the openssl behavior is absurdly hard to match completely)
02:31 < gmaxwell> I certainly wouldn't recommend anyone else spend time doing it. If someone really wants the 'safety' of rust with this code, you'd probably do humanity a better service to write a riscv bytecode interpreter and sandbox and just compile and run the C code in it. :P :P
02:32 < gmaxwell> well okay, to be fair it would be faster to rewrite by just monkey-copying code from the C version, though god knows what bugs doing that might add. :P
02:36 < real_or_random> hehe
03:38 -!- jonatack [~jon@54.76.13.109.rev.sfr.net] has joined #secp256k1
04:56 -!- ddustin [~ddustin@unaffiliated/ddustin] has quit [Remote host closed the connection]
05:52 < elichai2> amm when I let the caller input the pubkeys, should I require the keys array to be the length of the signatures or the signatures*4? (the output will be sigs*4, i'm thinking about the input)
05:53 < elichai2> oh hmm, the list of keys should be of unrelated length to the list of signatures.
05:53 < elichai2> so you can pass 50 keys but 2 signatures, and get back potentially up to 8 keys
05:55 < elichai2> although at some point it would be faster to just do gej->ge instead of checking equality against 10X more keys
07:17 -!- jonatack [~jon@54.76.13.109.rev.sfr.net] has quit [Ping timeout: 268 seconds]
08:39 -!- ddustin [~ddustin@unaffiliated/ddustin] has joined #secp256k1
08:45 -!- roconnor [~roconnor@host-104-157-204-21.dyn.295.ca] has joined #secp256k1
09:50 -!- jonatack [~jon@2a01:e35:8aba:8220:6627:dad:d967:649d] has joined #secp256k1
10:48 -!- cfields [~cfields@unaffiliated/cfields] has joined #secp256k1
11:00 -!- ddustin [~ddustin@unaffiliated/ddustin] has quit [Ping timeout: 245 seconds]
11:05 -!- ddustin [~ddustin@unaffiliated/ddustin] has joined #secp256k1
11:15 < gmaxwell> elichai2: it should return the indexes.
11:15 < gmaxwell> not the keys
11:17 < gmaxwell> If you say there is a maximum of 32 keys input, it can just return a bitmask.
11:20 < sipa> gmaxwell: do you think the secp function should implement the exact consensus-critical OP_CHECKMULTISIG logic?
11:20 < sipa> another possibility is to have a function that, given a list of public keys and a list of signature/message pairs, finds which ones match which, and you leave the order/abort/parseability/...
up to code in the higher layer
11:21 < sipa> the downside of the latter approach is that it's possible that a single signature matches multiple public keys in your list...
11:23 < gmaxwell> yeah, that's kind of unfortunate. I think having to match all guarantees slower behavior.
11:24 < gmaxwell> The interface is simpler if it implements the bitcoin behavior: you pass in the threshold, it returns true or false.
11:25 < sipa> well, it'd need to return true/false/abort
11:25 < sipa> and the function would need to be passed byte arrays instead of public keys, i think, as things depend on which ones are parseable
11:27 < sipa> and it would need to be passed a flag to enforce strictenc or not (standardness rule that requires more strict pubkey parsing)
11:28 < sipa> i feel that logic is better kept on the bitcoin side of things
11:28 < sipa> but the bitfield response works, if the function enforces correct ordering
11:33 < gmaxwell> I don't think it needs to return abort. You can do the abort processing by getting false and knowing you truncated the key list, no?
11:34 < sipa> not necessarily, i think
11:34 < sipa> say you have a 3-of-5, and the last key is unparseable
11:35 < sipa> if the first sig doesn't match the first pubkey, the result is false, not abort
11:35 < sipa> eh, the first sig doesn't match any of the first 3 pubkeys
11:40 < gmaxwell> Is the unparsability check that smart? that's kind of obnoxious.
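The bitfield interface sipa sketches above can be made concrete. This is a hypothetical Python model, not a proposed libsecp256k1 API: up to 32 candidate pubkeys, in-order matching (a signature may only match a key after the previously matched one), and the return value is a bitmask of which keys matched — the in-order rule is what resolves the "one signature matches several keys" ambiguity.

```python
def match_bitmask(sigs, msgs, pubkeys, verify):
    """Return a bitmask of which pubkeys were matched, in order.

    sigs/msgs are parallel lists; pubkeys is a list of at most 32
    candidates.  The key cursor only moves forward, so each signature
    can only match a key that comes after the previously matched one.
    verify(sig, msg, key) is a stand-in for real signature validation.
    """
    assert len(pubkeys) <= 32
    mask = 0
    k = 0
    for sig, msg in zip(sigs, msgs):
        while k < len(pubkeys):
            if verify(sig, msg, pubkeys[k]):
                mask |= 1 << k       # record the matched key's index
                k += 1
                break
            k += 1
    return mask
```

With a toy verify stub (a "signature" matches the key it names), two signatures hitting keys at indexes 1 and 3 yield the mask 0b01010; the caller on the bitcoin side can then apply threshold/abort logic by inspecting the bits.

```python
keys = ["A", "B", "C", "D", "E"]
toy_verify = lambda sig, msg, key: sig == key
assert match_bitmask(["B", "D"], [None, None], keys, toy_verify) == 0b01010
```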
11:41 < sipa> yes, it is
11:41 < sipa> when not enough signatures remain to possibly meet the threshold, the loop exits
11:41 < sipa> and further pubkeys are not parsed anymore
11:41 < sipa> we even have explicit tests for that
11:42 < sipa> this is only relevant for policy rules, though
11:43 < sipa> actually, this is not relevant in practice
11:43 < sipa> as it is only observable when STRICTENC is set, but NULLFAIL is not set
11:43 < sipa> but policy currently sets both, and consensus sets neither
12:25 < roconnor> BTW, it was my hope that we would be able to remove the unparsability policy rule with key recovery.
12:26 < roconnor> And free up funds that are currently very hard to redeem.
12:29 < roconnor> (and as you can see the policy rule is quite awkward).
12:30 < roconnor> That said, I'm not sure about the history of this policy, so I'm not sure what benefits it is supposed to bring.
12:53 < gmaxwell> the benefit is that there is a simple, blunt rule in the consensus code that allowed you to not have to expect an underlying crypto library to handle garbage keys in a consistent manner.
12:53 < gmaxwell> And for non-CMS I think the benefit is really clear and obvious.
12:54 < sipa> NULLFAIL made all of this a lot simpler
12:54 < sipa> i think?
12:54 < sipa> ah no, you can still have unused and unparseable pubkeys at the end
12:54 < gmaxwell> But not simpler for CMS.
12:55 < gmaxwell> or even with CMS if you don't assume that the user has some fancy crypto library that is implementing CMS itself in anything but the naive way.
13:01 < roconnor> perhaps it is worth considering relaxing or removing this rule when it comes to CMS. It has significant downsides.
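The early-exit behavior sipa describes can be modeled in a few lines. This is a simplified Python sketch of the matching loop, not Bitcoin Core's interpreter code (the real opcode also pops counts and enforces NULLFAIL/NULLDUMMY): keys are parsed lazily as the cursor reaches them, so once too few keys remain to satisfy the remaining signatures the loop exits and trailing keys, parseable or not, are never touched.

```python
def checkmultisig(sig_bytes, key_bytes, verify, parse_key):
    """Simplified model of the OP_CHECKMULTISIG matching loop.

    Two forward-only cursors walk signatures and keys; a failed check
    just advances the key cursor.  When fewer keys remain than unmatched
    signatures, success is impossible: the loop exits, and keys past the
    cursor are never parsed.  len(sig_bytes) plays the threshold role.
    """
    i, k = 0, 0
    success = True
    while i < len(sig_bytes):
        key = parse_key(key_bytes[k])    # keys are parsed on use, lazily
        if verify(sig_bytes[i], key):
            i += 1
        k += 1
        if len(sig_bytes) - i > len(key_bytes) - k:
            success = False              # threshold can no longer be met
            break
    return success

# toy stubs: a "signature" verifies against the key it names, and we
# record which keys were ever parsed
parsed = []
def parse_key(kb):
    parsed.append(kb)
    if kb == "garbage":
        raise ValueError("unparseable key")
    return kb

ok = checkmultisig(["B", "C"], ["A", "B", "C", "garbage"],
                   lambda s, key: s == key, parse_key)
assert ok and "garbage" not in parsed
```

The unparseable trailing key never raises because the loop succeeds (or gives up) before reaching it — which is exactly why the observability of the parseability policy depends on these loop details.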
13:01 < sipa> heh, i was thinking the opposite
13:01 < sipa> requiring all public keys to be parseable seems much more sensible
13:02 < sipa> even the "unused" ones
13:03 < sipa> though not touching the policy is probably easiest of all
13:04 < gmaxwell> the problem with all is that it potentially confiscates coins (if it's a consensus rule)
13:05 < gmaxwell> not just potentially, but would actually, because of really f@#$ moronic things.
13:05 < sipa> yeah
13:05 < roconnor> yes, it definitely confiscates coins.
13:05 < roconnor> (as a consensus rule)
13:05 < sipa> sure, not consensus
13:06 < gmaxwell> and not just a couple of bits of dust here and there, there are thousands of outputs with garbage 'keys' in cms.
13:06 < gmaxwell> Unless it's consensus, a faster cms implementation can't change how bad keys are handled.
13:07 < gmaxwell> so any kind of faster cms is stuck following the original behavior.
13:13 < roconnor> AFAIU the policy isn't really "these specific keys need to be parseable and these specific keys don't have to be parseable", but rather the policy is "whatever keys our implementation happens to parse need to be parseable", and if we change our implementation, then the specific keys that need to be parsed by the policy can change. AFAIU there doesn't need to be agreement on the details of policy between nodes, so we don't need to worry about changing the details of the behaviour here (including dropping the whole parseability requirement).
13:33 -!- belcher [~belcher@unaffiliated/belcher] has quit [Quit: Leaving]
14:28 < gmaxwell> it isn't that black and white.
14:29 < gmaxwell> if the change in policy makes something people are doing stop working, then that's a complication that has to be balanced.
14:44 < sipa> there is a reasonable argument that if we here have difficulty figuring out what the exact policy is, it's unlikely that someone is relying on those details
14:45 < sipa> but back to the question: i think the cleanest interface is a function that takes sigs, messages, and a list of pubkeys, and it finds the subset of matching keys, in order
14:45 < sipa> the return value can be a bitfield
14:46 < sipa> the logic on the bitcoin side can be pretty much kept as-is then, but instead of actual sig checks, it would inspect the bitfield
14:49 < gmaxwell> what happens if multiple keys match a signature?
14:50 < sipa> that's captured by the in-order matching in the function
14:50 < gmaxwell> so it has to keep executing even when it's sure it can't be successful or sure it will be successful?
14:51 < gmaxwell> ...that's likely to be slower in the average case than the current approach.
14:51 < sipa> i guess it can stop too, but it would still need to output bits for the already-matched keys
14:56 -!- ddustin [~ddustin@unaffiliated/ddustin] has quit [Remote host closed the connection]
14:56 -!- ddustin [~ddustin@unaffiliated/ddustin] has joined #secp256k1
14:57 -!- ddustin [~ddustin@unaffiliated/ddustin] has quit [Read error: Connection reset by peer]
14:58 -!- ddustin [~ddustin@unaffiliated/ddustin] has joined #secp256k1
15:04 < roconnor> (I think I'd rephrase what I said as: the current policy is that all pubkeys must be parseable, however we won't necessarily go out of our way to enforce this, but don't count on it.)
15:05 < gmaxwell> okay, I agree that's the 'advice' people should get.
15:05 < gmaxwell> but the network behaves a particular way right now, and in terms of the "number of people who surprisingly had a bad day" metric, the way the network actually works is the most important.
15:06 < elichai2> gmaxwell: btw, I think we should also consider what performance increase we are trying to achieve: block verification (and IBD) or general mempool verification.
Because block verification is usually a sunny-day scenario in terms of aborting (ie no early aborts)
15:08 < roconnor> If a new CMS implementation ends up with a strictly laxer implementation of the policy check, I think that would be considered safe to deploy.
15:09 < elichai2> sipa: And I'm not sure I get the order rule you're talking about. Do you mean like two cursors that keep going forward on each pubkey and signature? Only on one of them? (can I have sig number 5 be valid for pubkey num 2? Can I have pubkey number 5 be valid for sig number 2?)
15:14 < gmaxwell> roconnor: yes.
15:15 < gmaxwell> roconnor: though you would also want to keep in mind the motivation for the policy. Though I don't know if we ever actually encountered an implementation inconsistency for anything except hybrid keys.
15:16 < gmaxwell> elichai2: your list of possible objectives isn't the right list, I think.
15:17 < gmaxwell> elichai2: the interesting possible objectives are the worst-case time a valid block can take to be accepted; or saving a couple minutes on initial sync.
15:17 < elichai2> I agree with that. And that's why I'm not bothered too much about losing performance when we can "abort early"
15:17 < gmaxwell> Acceptance time on non-confirmed transactions isn't that awful in the worst case and can be addressed via standardness. Standardness cannot be used to tame the worst case of a block.
15:18 < sipa> elichai2: i think the goal should be improving the worst case, but under the constraint of not worsening the average case (or at least not significantly)
15:19 < gmaxwell> elichai2: if it doesn't "abort early" in the sense of learning when the checksig will pass, but instead keeps going on to find all matches, then it will end up slowing down initial sync, perhaps considerably.
15:19 < elichai2> I thought more in the sense of failing early. How can it know early that it will pass?
Isn't there a rule that you can't provide more sigs than the threshold?
15:19 < elichai2> (otherwise malleability etc.)
15:21 < gmaxwell> elichai2: earlier in the discussion sipa was suggesting a design where it had to return a bitmask of all matches-- and the function didn't even get the threshold. since one signature can match multiple keys, that behavior would require more work than a version that only had to meet the threshold.
15:21 < gmaxwell> (and in particular, I think it would considerably slow a 2-of-3 validation.)
15:24 < elichai2> Hmm. A. Is it actually feasible that accidentally a signature will match more than one of the keys there? (if not on purpose and not just the same key twice). B. We're only talking about the ge<->gej comparison? Unless we should already start comparing before computing all possibilities? (ie in my code do the comparisons inside of the _four function)
15:26 < gmaxwell> elichai2: it's trivial to construct inputs intentionally where more than one key gets matched; if the interface is defined to always return all the matches then you have to do more work in common cases to find all of them.
15:26 < elichai2> When I think about it, B will also improve the memory requirements a bit.
15:27 < gmaxwell> but that's stupid because the actual behavior needs to only pay attention to the first match or it's consensus-inconsistent.
15:29 < elichai2> gmaxwell: well with my current design it will only be about the ge<->gej cmp, which in my benchmarks isn't that much time. (almost all the time is in ecmult) but if we violate the current design by moving the cmp into the calculations and returning success early then yeah it can improve performance even more (statistically probably 50% of the time I think)
15:29 < elichai2> About consensus I currently thought more like what sipa was talking about.
Just give the correct keys back and let core take care of the exact consensus means
15:30 < gmaxwell> sipa: also, it's important to be specific that it's the worst case in terms of work per byte, so e.g. making the worst case of 2-of-3 faster isn't relevant if it's never on the worst-case frontier.
15:30 < sipa> gmaxwell: right, that's why i say the goal should be improving the worst case
15:30 < sipa> which is at this point clearly 1-of-20
15:31 < gmaxwell> sipa: right, but I think that if you iteratively apply the logic, e.g. once 1-of-20 is done, repeat... you never get to 2-of-3.
15:32 < gmaxwell> because the optimized work per byte for some other threshold is already worse than the naive 2-of-3 worst case.
15:32 < gmaxwell> which is important, because right now elichai2's approach looks like a substantial performance hit for 2-of-3 in practice.
15:33 < gmaxwell> and certainly for 2-of-2..
15:33 < gmaxwell> and those are the two most common cases.
15:33 < elichai2> 2of2 this will never be better
15:33 < gmaxwell> So long as the performance of those cases isn't hurt, the average performance won't meaningfully be hurt.
15:34 < elichai2> About 2of3 you argue that most txs out there put the sigs for the first 2 keys? (ie only 2 verifications actually happen, and not 3)
15:34 < gmaxwell> elichai2: I don't think returning the keys is good layering at all, because it exposes calling code to really surprising cryptographic vulgarities (that there can be a radically different number of potentially matching keys, for obscure reasons); but also it's not good for performance, because it means you have to do extra work (including modular inversions) to return keys instead of matching them.
15:35 < gmaxwell> elichai2: I think so, but I think given your numbers, even if the missing key is uniformly distributed, it'll be a slowdown on average.
(because the current code doesn't need to run the third validation 1/3rd of the time)
15:35 < elichai2> gmaxwell: so you prefer to return an index on first success?
15:36 < gmaxwell> I think it should probably just internalize the consensus behavior of CMS (without the policy stuff). then relax the policy restriction for CMS.
15:37 < gmaxwell> so you'd feed it N keys, M sigs, and it would return true or false if the CMS consensus logic would return true or false.
15:37 < elichai2> gmaxwell: too late here to actually do the calc for the timing of 2/3 on average (ie this approach a bit faster 2/3 of the times but way slower 1/3) lol
15:37 < gmaxwell> and internally it should decide to use recovery or ordinary validation as it sees fit.
15:38 < elichai2> That would make things easier
15:38 < elichai2> Because then we can easily both fail early if needed and succeed early if possible
15:39 < gmaxwell> since AFAIK we've never actually encountered a case where implementations were inconsistent in their public key decoding (except for hybrid keys, which the standardness rules don't protect bitcoin from) it wouldn't be a big deal to drop that policy behavior.
15:40 < gmaxwell> elichai2: I also gave an algorithm yesterday that should be faster... where it switches from recovery to ordinary validation once ordinary validation will be faster. E.g. say you have a 10-of-20. You do recovery and find out the first signature matches the 10th key. Great. Now you should do ordinary validation for the next 10 signatures.
15:40 < gmaxwell> er, next 9.
15:40 < elichai2> Ha
15:41 < elichai2> That's 100% specific for cms, right?
15:41 < gmaxwell> basically that algo is just to look at the threshold, decide if recovery is faster. If so use it, then remove that signature and key and go to step 1.
15:41 < elichai2> What are hybrid keys?
15:41 < gmaxwell> Yes, it depends on the CMS skipping behavior.
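The switch-over algorithm gmaxwell describes can be sketched as a Python model. The cost constants and the expected-scan estimate here are assumptions for illustration, not secp256k1 benchmarks, and `recover`/`verify` are toy stubs standing in for ECDSA public key recovery and ordinary verification:

```python
# Assumed relative costs: one key recovery is somewhat more expensive
# than one ordinary verification (illustrative numbers only).
RECOVER_COST = 1.2
VERIFY_COST = 1.0

def cms_check(sigs, keys, recover, verify):
    """Model of the recovery-then-ordinary CMS strategy.

    At each signature, estimate how many trial verifications the
    ordinary forward scan would need on average; while recovering the
    signature's key and locating it among the remaining candidates is
    expected to be cheaper, use recovery, otherwise fall back to the
    ordinary cursor-based validation.
    """
    k = 0  # cursor into keys
    for i, sig in enumerate(sigs):
        m_left = len(sigs) - i           # signatures still to satisfy
        n_left = len(keys) - k           # candidate keys still available
        if n_left < m_left:
            return False                 # threshold can no longer be met
        # crude estimate of trial verifications for an in-order scan
        expected_scans = (n_left - m_left + 2) / 2
        if RECOVER_COST < VERIFY_COST * expected_scans:
            key = recover(sig)           # which key did this sig sign for?
            if key in keys[k:]:
                k = keys.index(key, k) + 1
                continue
            return False
        # ordinary validation: advance the key cursor until a match
        while k < len(keys) and not verify(sig, keys[k]):
            k += 1
        if k == len(keys):
            return False
        k += 1
    return True

# toy stubs: a "signature" is just the key value it signs for
recover = lambda sig: sig
verify = lambda sig, key: sig == key
assert cms_check([10, 17], [3, 7, 10, 14, 17], recover, verify)      # 2-of-5, in order
assert not cms_check([10, 3], [3, 7, 10, 14, 17], recover, verify)   # out of order
```

Note how the 10-of-20 example from the log falls out of the estimate: once recovery has placed the first signature at the 10th key, the remaining signatures have as many keys left as signatures, so ordinary validation takes over.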
15:41 < gmaxwell> elichai2: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2012-June/001578.html
15:42 < gmaxwell> elichai2: the bad space usage of uncompressed keys plus the extra complexity of compressed keys, all in one helpful, nearly undocumented format!
15:42 < gmaxwell> bbl
15:42 < elichai2> Btw I think succeeding early can only give us half an addition skip in almost all cases. Not sure it's worth it
15:43 < elichai2> gmaxwell: libsecp supports that? Don't think I've heard of it lol
15:43 < sipa> elichai2: it does, because it must
15:43 < elichai2> So if you OR both the compressed flag and the uncompressed flag, that's what you get?
15:43 < elichai2> Weird
15:44 < sipa> elichai2: right, never realized that
15:44 < sipa> but that's exactly it
15:45 < sipa> openssl supports it
15:45 < roconnor> I'm guessing it is just part of OpenSSL's cavalier attitude to validating inputs. As in, if the input data is ill-formatted then we can do whatever we'd like that makes our processing faster.
15:45 < elichai2> Ha. 0x07 and 0x06: pretty sure it doesn't exist in SEC or any other paper
15:45 < sipa> indeed
15:45 < elichai2> (as my naive simple impl is based on those papers and doesn't support it iirc)
15:46 < sipa> roconnor: what do you mean? it should be unacceptable to encode an integer as a 7-level-deep data structure as long as its first field is an integer?
15:47 < elichai2> Anyway I should go to sleep. If there are any more interesting conclusions I would love to hear them in the morning and I'll try to document some of this in the PR
15:50 < elichai2> And I should really investigate and learn the exact behavior of (consensus) CMS.
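The hybrid encoding under discussion is small enough to sketch directly. This is illustrative Python using the public secp256k1 constants, not libsecp256k1's parser: the prefix byte is the compressed-style parity flag ORed into 0x06 (so 0x06 for even Y, 0x07 for odd Y), yet the full 32-byte Y coordinate follows anyway, and a careful parser checks that the flag and Y agree — the check X9.62 famously makes optional.

```python
# secp256k1 field prime and generator coordinates (public constants)
P = 2**256 - 2**32 - 977
GX = 0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798
GY = 0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8

def encode_hybrid(x, y):
    # parity flag like a compressed key, but the full Y is sent anyway:
    # 65 bytes total, prefix 0x06 (even Y) or 0x07 (odd Y)
    return bytes([0x06 | (y & 1)]) + x.to_bytes(32, "big") + y.to_bytes(32, "big")

def parse_hybrid(b):
    if len(b) != 65 or b[0] not in (0x06, 0x07):
        raise ValueError("not a hybrid key")
    x = int.from_bytes(b[1:33], "big")
    y = int.from_bytes(b[33:65], "big")
    if (y * y - x * x * x - 7) % P != 0:
        raise ValueError("not on curve")            # y^2 = x^3 + 7
    # the consistency check X9.62 makes "optional": flag parity vs actual Y
    if (b[0] & 1) != (y & 1):
        raise ValueError("parity flag inconsistent with Y")
    return x, y

blob = encode_hybrid(GX, GY)
assert blob[0] == 0x06              # the generator's Y is even
assert parse_hybrid(blob) == (GX, GY)
```

Flipping the prefix to 0x07 on the same coordinates produces exactly the inconsistent encoding implementations may "optionally" reject, which is why different libraries disagree on whether such keys parse.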
15:51 < sipa> elichai2: i'm afraid you'll have to
15:55 -!- luke-jr [~luke-jr@unaffiliated/luke-jr] has quit [Read error: Connection reset by peer]
15:57 -!- luke-jr [~luke-jr@unaffiliated/luke-jr] has joined #secp256k1
16:16 -!- jonatack [~jon@2a01:e35:8aba:8220:6627:dad:d967:649d] has quit [Ping timeout: 246 seconds]
16:19 < gmaxwell> elichai2: there actually is a spec that has the hybrid stuff in it.
16:19 < gmaxwell> someone somewhere implemented it intentionally, god knows why.
16:19 < roconnor> The existence of such a spec is incredible...
16:20 < gmaxwell> you could imagine it to be a useful storage format for a system that might be expected to hand out keys in both compressed and uncompressed forms.
16:22 < gmaxwell> as you can cheaply extract either from it. (keep in mind, in some groups, particularly ones with characteristic-2 fields, extracting the compression flag isn't quite so simple as it is for prime fields; it requires a trace operation)
16:23 < gmaxwell> also keep in mind that certicom had patents on the compressed keys, at least for characteristic 2, which they used to (falsely) claim to have patents on compressed keys in general, which probably also encouraged weird behavior by implementers.
16:28 < gmaxwell> e.g. looking at the openssl docs:
16:28 < gmaxwell> -conv_form
16:28 < gmaxwell> This specifies how the points on the elliptic curve are converted into octet strings. Possible values are: compressed (the default value), uncompressed and hybrid. For more information regarding the point conversion forms please read the X9.62 standard. Note Due to patent issues the compressed option is disabled by default for binary curves and can be enabled by defining the preprocessor macro
16:28 < gmaxwell> OPENSSL_EC_BIN_PT_COMP at compile time.
16:29 < gmaxwell> so it might just have been someone's pithy idea of a patent workaround: "we'll implement hybrid and then the user can convert to compressed and we'll be none the wiser"
16:30 < gmaxwell> oh btw the libsecp256k1 source gives a spec citation for hybrid.
16:30 < gmaxwell> src/tests.c: /* Check that the X9.62 hybrid type is checked. */
16:31 < gmaxwell> "NOTE-- The hybrid form contains information of both compressed and uncompressed forms. It allows an implementation to convert to either compressed form or to uncompressed form."
16:31 < gmaxwell> Gee thanks guys: "NOTE-- If hybrid form is used, an implementation may optionally check that y~p and yp are consistent (see steps 5.2.1 and 5.2.2)."
16:31 < sipa> optionally ?!
16:32 < gmaxwell> I mean, you _know_ that's how implementations were going to treat it (optionally) no matter what the spec says. :P
16:32 < gmaxwell> it's not even just boring old crap that is inconsistent with what it accepts and rejects, ed25519 implementations are too.
17:38 -!- ddustin [~ddustin@unaffiliated/ddustin] has quit [Remote host closed the connection]
17:39 -!- ddustin [~ddustin@unaffiliated/ddustin] has joined #secp256k1
17:44 -!- ddustin [~ddustin@unaffiliated/ddustin] has quit [Ping timeout: 276 seconds]
17:59 -!- lukedashjr [~luke-jr@unaffiliated/luke-jr] has joined #secp256k1
17:59 -!- luke-jr [~luke-jr@unaffiliated/luke-jr] has quit [Ping timeout: 245 seconds]
18:05 -!- lukedashjr [~luke-jr@unaffiliated/luke-jr] has quit [Ping timeout: 265 seconds]
18:32 -!- luke-jr [~luke-jr@unaffiliated/luke-jr] has joined #secp256k1
20:06 -!- luke-jr [~luke-jr@unaffiliated/luke-jr] has quit [Ping timeout: 276 seconds]
20:12 -!- luke-jr [~luke-jr@unaffiliated/luke-jr] has joined #secp256k1
20:54 -!- luke-jr [~luke-jr@unaffiliated/luke-jr] has quit [Ping timeout: 268 seconds]
23:50 -!- jonatack [~jon@2a01:e35:8aba:8220:6627:dad:d967:649d] has joined #secp256k1
--- Log closed Wed Oct 23 00:00:44 2019