--- Log opened Mon Oct 21 00:00:40 2019
02:32 -!- cfields [~cfields@unaffiliated/cfields] has quit [Ping timeout: 240 seconds]
08:35 < elichai2> Andytoshi isn't here but maybe someone else might have an answer for me. How feasible do you think it is to model bitcoin's blockchain verification as a set of polynomial constraints?
08:35 < elichai2> Such that I can make a zk proof for the utxo set
09:01 < sipa> if you mean an arithmetic circuit... a sha256 compression function run is 25600 gates
09:25 < elichai2> Honestly I'm not exactly sure what arithmetic circuits are in practice (I get the theoretical idea of gates, but what are they in practice?)
09:25 < elichai2> I'm playing with the idea of a light wallet that uses STARKs; from what I've been playing with so far, STARKs require writing polynomial constraints on the inputs. So I'm not sure what the relationship is between polynomial constraints and arithmetic circuits
09:26 < sipa> ah, i believe STARKs rely on repeated application of a (single) polynomial relation
09:27 < sipa> while arithmetic circuits require unrolling the whole thing and writing it as linear functions and multiplication gates
09:28 < elichai2> I want to pitch to starkware the idea of doing a bitcoin light wallet, but first I need to figure out if it's even feasible lol
09:28 < elichai2> I think that would be an interesting experiment for bitcoin with ZK proofs, and might be good now that SPV is dying
09:29 < sipa> it would be infeasible
09:31 < elichai2> Really? Hmm
09:32 < sipa> you'd need to prove validity of the entire blockchain that includes your transaction, right?
09:32 < elichai2> Sadly the interpreter is a bit too convoluted
09:32 < elichai2> Hmm, I thought more along the lines of proving the utxo set
09:33 < sipa> what's the point?
09:33 < elichai2> That there will be a centralized server that produces zk proofs for the utxo set. And then when people send you money you could verify that their tx is valid and uses a utxo from your set
09:34 < sipa> how can they verify the tx is valid?
09:34 < sipa> put otherwise: what attacker are you protecting against?
09:35 < elichai2> A valid signature spending a TX from the proved utxo set
09:35 < sipa> but you don't know that the UTXO set is valid
09:35 < elichai2> And then after a confirmation that tx will be in the new proved utxo set
09:35 < elichai2> That's where the proofs come in
09:36 < sipa> you know a blockchain exists which produced a utxo set, and your utxo is in it
09:36 < sipa> but if you don't prove that that blockchain is valid, what have you gained?
09:37 < elichai2> I don't quite understand what you mean. I know a blockchain exists which produced a utxo set that I have; now I can check what transactions are in the utxo set
09:37 < elichai2> And which of those are mine
09:37 < sipa> yes, but what attacker are you protecting against?
09:38 < sipa> what problem are you trying to solve
09:38 < elichai2> I'm trying to solve the problem of knowing what transactions I own that are included in the longest chain
09:39 < sipa> so, SPV level security?
09:40 < elichai2> Well, if the proof can also cover the validity of the chain and not just the right difficulty, then that should be better than SPV
09:41 < sipa> you can't prove the entire validity
09:41 < sipa> except maybe using some recursive zero-knowledge proof system
09:41 < elichai2> Because it's too complicated?
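As a concrete illustration (not from the discussion) of sipa's "linear functions and multiplication gates": to prove knowledge of an x satisfying x^3 + x + 5 = 35, the computation unrolls into

    t1 = x * x          (multiplication gate)
    t2 = t1 * x         (multiplication gate)
    t2 + x + 5 = 35     (linear constraint)

two multiplication gates plus one linear constraint over the proof system's field. A sha256 compression function unrolls the same way into the ~25600 gates sipa quotes above.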
09:41 < sipa> but even that seems very much a stretch
09:41 < sipa> yeah, just computationally infeasible
09:43 < elichai2> Are there any proof systems that are updatable? Such that we can increment the proof with every block without needing to regenerate it?
09:43 < sipa> with a recursive proof system you can prove validity incrementally (you create a proof that says "i know of a proof for a previous block, which when fed to a verifier succeeds, and there is a public block X that satisfies all rules, and its predecessor is that valid block")
09:44 < elichai2> Oh right
09:44 < sipa> you run the proof verifier for the previous statement as part of the next statement
09:44 < sipa> but even a proof for a single bitcoin block that includes all rules is infeasible i think
09:45 < sipa> for the time being
09:47 < elichai2> Do you think the hardest part would be modeling the interpreter?
09:48 < sipa> and the utxo set
10:21 < elichai2> https://twitter.com/benediktbuenz/status/1186320383961513986?s=19
10:23 -!- ddustin [~ddustin@unaffiliated/ddustin] has joined #secp256k1
10:28 -!- ddustin [~ddustin@unaffiliated/ddustin] has quit [Ping timeout: 264 seconds]
10:47 < sipa> they rely on class groups, or trusted rsa setup though
10:56 -!- sipa [~pw@gateway/tor-sasl/sipa1024] has quit [Remote host closed the connection]
10:56 -!- sipa [~pw@gateway/tor-sasl/sipa1024] has joined #secp256k1
11:00 < gmaxwell> elichai2: state of the art prover systems are equivalent to a microprocessor running at a few hz... that sort of thing. "wouldn't it be nice if you could ZKP the whole chain validity" is an old thought which has been discussed many times (see #bitcoin-wizards logs) but it's not remotely within the realm of practicality, even in 'more efficient' proof systems.
11:02 < elichai2> gmaxwell: I'll try to search through the logs (someone should really index them hehe). I thought of it more as a central system, not a decentralized consensus based on it
11:02 < gmaxwell> (also a number of the more efficient no-trusted-setup systems have linear time verification, which makes them mostly pointless for chain history, since the whole purpose of doing it isn't secrecy but saving resources)
11:03 < elichai2> but I had the feeling it's still infeasible. and I think that unless we move to a more concrete interpreter (maybe simplicity helps here?) it might never be feasible
11:04 < gmaxwell> script has a fairly low upper computational bound. I doubt it's the most concerning element.
11:05 < gmaxwell> (esp since unusual scripts are so rare you could skip their validation, copy their data into a separate data structure, and just send it separately to be validated by the client without any big change in efficiency)
11:06 < elichai2> btw I think STARKs have log verification
11:06 < elichai2> or something similar
11:08 < elichai2> gmaxwell: btw, roconnor told me you might be interested in benchmarking this with more real-life scenarios :) https://github.com/bitcoin-core/secp256k1/pull/658 if you do, I might push an update soon that will decrease it by another 1-2 percent
11:10 < gmaxwell> elichai2: yes, but the constant factors in stark performance make them currently unrealistic for big problems.
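To make sipa's 09:43 recursive statement concrete, here is its shape in symbols (Verify, ValidBlock, Apply and the chain state S_i are hypothetical placeholders, not anything from an actual proof system): the proof pi_n accompanying block B_n asserts

    there exist pi_{n-1} and S_{n-1} such that
        Verify(vk, pi_{n-1}, S_{n-1})     -- the previous proof passes the verifier
        and ValidBlock(B_n, S_{n-1})      -- the public block satisfies all rules
        and S_n = Apply(S_{n-1}, B_n)     -- the new state results from applying it

so checking pi_n alone attests to the whole chain up to block n, and extending the chain only requires proving one new block plus one run of the verifier inside the statement.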
11:11 < gmaxwell> (like if you look at the stark paper, their ability to benchmark was constrained by their test system only having 1TB of ram or something like that, yet the expressions they were testing were all still essentially toy problems compared to validating a block)
11:12 < gmaxwell> (and all that was also only at a 60-bit security level...)
11:15 < gmaxwell> elichai2: I'm surprised you're not passing in the public keys?
11:16 < elichai2> That's the 1-2 percent increase I'll push
11:16 < elichai2> It doesn't increase as much as I'd hoped :/
11:18 < gmaxwell> in any case, this sort of thing should probably be implemented with an interface that just takes a message, a count N, N pubkeys, N signatures... and then how it's implemented behind the scenes is invisible to the caller.
11:19 < gmaxwell> only 1-2% to eliminate N non-GMP inverses? seems surprising to me.
11:20 < gmaxwell> elichai2: aside, are you batching the scalar inverse?
11:23 < elichai2> Yeah, we debated whether it should support more than one message or not
11:23 < elichai2> gmaxwell: yes https://github.com/bitcoin-core/secp256k1/pull/658/files#diff-b04459e37839cd223176618536295715R225
11:25 < gmaxwell> In checkmultisig IIRC there is only one message, but also IIRC the one-message case can be faster, because it's only in that case that you can use the recovery to skip inapplicable keys.
11:26 < gmaxwell> e.g. when you have a 2 of 3 CMS and it's the first or second key that isn't providing a signature.
11:27 < elichai2> using 1 message won't increase the speed though
11:28 < elichai2> what do you mean by skip?
11:28 < elichai2> fail early?
11:28 < gmaxwell> no, it doesn't fail.
11:30 < gmaxwell> For example: for a two of three cms the network provides three pubkeys, two signatures, one message. The verifier is required to verify as if it tries each pubkey,signature in order until it meets the criteria. So if you signed with the second and third pubkeys it will fail, pass, pass.
11:31 < gmaxwell> the idea motivating using recovery is that you avoid that fail, by recovering the pubkey and then looking it up in the list.
11:31 < elichai2> until now I follow :)
11:33 < elichai2> this is what I do. I get N pubkeys, N signatures and I return pointers to the valid pubkeys from the list per signature. and then the core side needs to check the order and the number of sigs with valid keys
11:33 < gmaxwell> that isn't the problem in bitcoin.
11:33 < elichai2> but the number of messages doesn't matter for that
11:34 < gmaxwell> There are N pubkeys, M signatures, M<=N, and 1 message. You don't know which pubkey goes with which signature.
11:34 < gmaxwell> Except they're required to be in order.
11:34 < elichai2> so if we can generalize this function without sacrificing speed then why not
11:34 < sipa> welll... they could be different sighashes
11:34 < gmaxwell> sipa: okay, true, but this optimization wasn't expected to be interesting if they were.
11:35 < sipa> ah, right
11:35 < gmaxwell> If you have N pubkeys, N signatures, N messages, I don't think there is any reason to use _recovery_ -- though a batch interface could still be faster due to sharing inversions.
11:35 < elichai2> well you can implement something like CMS using that
11:36 < elichai2> but not just bitcoin's CMS
11:36 < gmaxwell> The motivation behind recovery is the fact that you don't know which pubkeys go with which signatures in checkmultisig.
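A minimal sketch of the batched scalar inversion gmaxwell asks about at 11:20 (Montgomery's trick), with 64-bit arithmetic modulo a small prime standing in for the secp256k1 scalar field; the names are illustrative, not libsecp256k1's API. One real inversion plus 3(n-1) multiplications replaces n inversions:

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define P 0xFFFFFFFFFFFFFFC5ULL /* 2^64-59, prime; stand-in for the group order */

    static uint64_t mulmod(uint64_t a, uint64_t b) {
        return (uint64_t)(((unsigned __int128)a * b) % P);
    }

    /* single inverse via Fermat's little theorem: a^(P-2) mod P */
    static uint64_t invmod(uint64_t a) {
        uint64_t r = 1, e = P - 2;
        while (e) {
            if (e & 1) r = mulmod(r, a);
            a = mulmod(a, a);
            e >>= 1;
        }
        return r;
    }

    /* invert x[0..n-1] in place with a single invmod() call */
    static void batch_inverse(uint64_t *x, size_t n) {
        uint64_t *prefix = malloc(n * sizeof(*prefix));
        uint64_t acc = 1;
        for (size_t i = 0; i < n; i++) {
            prefix[i] = acc;                         /* product of x[0..i-1] */
            acc = mulmod(acc, x[i]);
        }
        acc = invmod(acc);                           /* the only real inversion */
        for (size_t i = n; i-- > 0; ) {
            uint64_t inv_i = mulmod(acc, prefix[i]); /* = 1/x[i] */
            acc = mulmod(acc, x[i]);                 /* = 1/(x[0]*...*x[i-1]) */
            x[i] = inv_i;
        }
        free(prefix);
    }

    int main(void) {
        uint64_t xs[3] = {2, 12345, 99999};
        batch_inverse(xs, 3);
        for (int i = 0; i < 3; i++)
            printf("x[%d]^-1 = %llu\n", i, (unsigned long long)xs[i]);
        return 0;
    }

(If I remember right, libsecp256k1 implements the field-element analogue of this in secp256k1_fe_inv_all_var.)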
11:36 < elichai2> you can make a threshold with any order
11:36 < sipa> gmaxwell: the optimization is still interesting when the messages are distinct, i think?
11:36 < gmaxwell> elichai2: using what? I'm not sure which comment you're referring to.
11:36 < sipa> at least when M is sufficiently smaller than N
11:37 < sipa> as every sig has exactly one msg
11:37 < sipa> so you run recovery on M sig/msg pairs, and match them against the N pubkeys
11:37 < elichai2> gmaxwell: using the batch recovery trick. you can have weird threshold protocols even if the messages aren't exactly the same
11:37 < sipa> which is O(M) work
11:38 < elichai2> exactly what sipa said
11:38 < sipa> elichai2: i have no clue what you're talking about
11:38 < sipa> you don't even need batch recovery for this (but that makes it slightly better still)
11:38 < elichai2> i'm just saying that there might be use cases for doing this even if you have more than one message
11:38 < elichai2> of course you don't *need* it
11:38 < elichai2> it's just a speed increase
11:39 < sipa> sorry, i was confused by "threshold protocols"
11:39 < gmaxwell> sipa: Ah, I thought there would be a problem where you could return an inconsistent result, but you're right, the messages are paired with the signatures, not the pubkeys, so yes, that works.
11:39 < sipa> i thought you were suddenly talking about something schnorr
11:39 < elichai2> oh lol no, not schnorr
11:39 < sipa> if there is only one message i think you can do something even better
11:40 < gmaxwell> So indeed, in any case -- then the interface is N pubkeys, M signatures, M messages. And it returns true if all the signatures match against a pubkey in the list, and they're all in order.
11:40 < sipa> ECDSA's equation can compactly be written as sR = mG + rP
11:40 < gmaxwell> but it's still more efficient with one message.
11:40 < sipa> (ignoring the field/scalar conversion)
11:41 < sipa> so P = (sR - mG)/r
11:41 < sipa> if you precompute mG you save ~1 EC multiplication per recovery
11:41 < sipa> oh, no, nvm
11:41 < elichai2> gmaxwell: another topic. #667. i'm redoing that without the asm magic, and it's still faster; I think the optimizer does some loop unrolling which affects the result. I think we might still want the asm magic even if we make sure to use the result of each operation. otherwise it will unroll and probably show faster numbers than the actual time the tested function takes
11:41 < sipa> it's still 2 EC multiplications
11:42 < elichai2> yeah, we multiply m*(r^-1) and only then by G
11:46 < gmaxwell> elichai2: I don't really understand your claim that the numbers you posted are faster for 2 of 3. A lot of the time with 2 of 3, the missing key is the third one. So the normal verification is pass/pass, with no failure. Other than by batching the scalar inverse you couldn't make that case faster.
11:47 < gmaxwell> I think the numbers you give are only faster for the two cases where the missing signature is in the first or second position.
11:47 < elichai2> hmm you're right
11:47 < elichai2> I haven't considered the ordering
11:47 < sipa> if you want to improve the worst case, that's what matters
11:47 < sipa> if you want to improve the average performance, perhaps not
11:48 < sipa> but it's not unreasonable to focus on improving the worst case, if it doesn't come at a great cost to the average case
11:48 < gmaxwell> well, worst case is like a 1 of 20, which this would obviously improve. :P
11:49 < gmaxwell> I don't think the 'worst case' of a 2 of 3 though is interesting. Say someone starts bombing the network with 2 of 3 multisigs where it's always the first key missing. So what? :P
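Filling in the algebra behind sipa's sR = mG + rP (11:40): with key P = dG, nonce point R = kG, r = x(R) mod n, ECDSA sets s = k^-1 (m + r*d) mod n. Multiplying through by k,

    s*R = s*k*G = (m + r*d)*G = m*G + r*P   =>   P = (s*R - m*G) / r

and since only the x-coordinate r of R is transmitted, R is recovered up to sign, so recovery yields the candidate keys P = +-(s/r)R - (m/r)G to look up in the pubkey list.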
11:49 < sipa> fair
11:50 < elichai2> btw, historic data on multisig usage that a friend gathered for me: https://pastebin.com/fyC0yuv7
11:50 < elichai2> by his data, 9.2% of all historic utxos are multisig
11:50 < gmaxwell> yeah, you really need to know the positions of the failures to analyze the average-case performance of this.
11:51 < elichai2> yeah, that's a great point
11:51 < gmaxwell> In terms of worst-case performance, though, you know it.
11:51 * gmaxwell out &
11:51 < elichai2> I need to look at the CMS code to check that we can easily check which is missing
11:52 < sipa> elichai2: you could instrument the CMS verification code to just count how many actual signature checks are performed
11:52 < sipa> maybe broken down by k/n
11:53 < elichai2> yeah, but how do you check which are missing, the first or the last
11:53 < elichai2> because as gmaxwell said, if it's ordered as in the best-case scenario then we can't improve on that
11:53 < sipa> that's irrelevant
11:53 < sipa> the question is how many signature checks are performed
11:53 < sipa> which is a function of which keys are missing
11:54 < sipa> but the actual thing you're interested in is how many signature checks happen in practice
11:54 < sipa> like you could build all kinds of statistics about which keys are missing... and then end up aggregating them by trying to infer how many signature checks that implies per OP_CMS
11:55 < sipa> or you can measure the number of signature checks directly
11:55 < gmaxwell> you just logprint "%d of %d %d",k,n,number_of_times_verify_got_called
11:55 < sipa> exactly
11:55 < elichai2> but only for verify as part of CMS
11:55 < gmaxwell> right.
11:56 < gmaxwell> which is easy, since it's right there inside the cms handling block. should be a one line change.
11:56 < elichai2> i'll modify the code and reindex hopefully this week lol (I really need a server for this kind of stuff :/ )
11:56 < elichai2> it will be interesting to do it with assumevalid=0 but it won't represent what happens in practice
11:56 < gmaxwell> if you want to measure the whole chain you'll need -assumevalid=0, though IMO it's much more interesting to just run this on recent blocks.
11:57 < elichai2> lol :)
11:57 < gmaxwell> since behavior in the far past is kind of irrelevant.
11:58 < gmaxwell> esp since it won't get run there for ordinary user syncs, it would be perfectly fine to make the average slower for the far past. Less obviously fine to make the average slower for current blocks.
11:58 < gmaxwell> unrelated: it may be useful to benchmark the speedup from the shared scalar inverse alone, since e.g. 2 of 2 would be fastest using an ordinary verification but batching the scalar inverses.
11:59 < gmaxwell> now I actually need to leave
12:00 < gmaxwell> (but that 2 of 2 case is one of the reasons I prefer an implementation-independent interface, e.g. it could silently decide to recover in the background based on the input... if N=M you wouldn't ever want to recover....)
12:00 < elichai2> well with gmp it's probably almost nothing, but we don't compile with gmp in practice
12:00 < elichai2> *by default
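A self-contained sketch (not Bitcoin Core's actual code, though it mirrors the normative loop gmaxwell describes at 11:30) of the OP_CHECKMULTISIG matching algorithm, instrumented to count signature checks as suggested at 11:55. check() is a hypothetical oracle standing in for a full ECDSA verify; the example is gmaxwell's 2-of-3 signed with the second and third keys, reproducing the fail, pass, pass pattern:

    #include <stdio.h>

    /* which key index each signature was actually made with (2-of-3 example) */
    static const int sig_key[2] = {1, 2};

    /* stand-in for one expensive ECDSA verification */
    static int check(int isig, int ikey, int *count) {
        (*count)++;
        return sig_key[isig] == ikey;
    }

    int main(void) {
        int nsigs = 2, nkeys = 3;           /* k of n */
        int isig = 0, ikey = 0, count = 0, success = 1;
        while (success && nsigs > 0) {
            if (check(isig, ikey, &count)) { isig++; nsigs--; }
            ikey++; nkeys--;
            if (nsigs > nkeys) success = 0; /* too few keys left to match */
        }
        /* the one-line instrumentation gmaxwell suggests */
        printf("%d of %d: %d checks, result %d\n", 2, 3, count, success);
        return 0;
    }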
12:13 < elichai2> what's the point of the addition here? https://github.com/bitcoin-core/secp256k1/blob/0d9540b13ffcd7cd44cc361b8744b93d88aa76ba/src/bench_internal.c#L247
12:15 < sipa> elichai2: it's a variable-time algorithm, and you want to measure average behavior, not one specific instance
12:15 < sipa> so you have to vary the problem
12:15 < elichai2> ohhh
12:48 -!- ddustin [~ddustin@unaffiliated/ddustin] has joined #secp256k1
12:56 -!- ddustin [~ddustin@unaffiliated/ddustin] has quit [Remote host closed the connection]
16:40 < gmaxwell> elichai2: batch inversion is still a lot faster than the gmp inversion... it's just that gmp inversion is only like 2% of the verification time instead of 20% or whatnot.
16:41 < sipa> i don't think the difference is that big
16:41 < sipa> but yeah
16:48 < gmaxwell> I think you can put your multiple recoveries on a common z by taking the products of all the other zs.
16:50 < gmaxwell> guess that being a win would depend on how close n and m are to each other.
16:51 < gmaxwell> irritatingly, if someone has 0x04 style keys, you still have to validate their Y's.
16:57 < sipa> The cost of this approach is that you need 2 independent EC multiplications per signature, rather than 1 multi-mult of 2 multiplications per public key
16:59 < sipa> so it probably works with k < 2/3 n or so
17:00 < gmaxwell> sipa: I believe I've linked here before a function for the x-only comparison needed for a 2 of 2 using a single multimult. (the x-only comparison is expensive, but by far less than another exponentiation) -- seems 2 of 2 is pretty common in those figures posted earlier.
17:08 < gmaxwell> sipa: 2 independent ec multiplications? hm. does dettman's x-only trick buy anything there?
17:09 < sipa> P = +-(s/r)R - (m/r)G
17:10 < sipa> so if you multiply (m/r)G and (s/r)R independently you can compute both their sum and their difference
17:12 < gmaxwell> (re, batch ecdsa: https://cse.iitkgp.ac.in/~abhij/publications/ECDSA-SP-ACNS2014.pdf might be the paper I linked before)
17:17 < sipa> so a multiplication with G costs 40us, a multiplication with an arbitrary point costs 50us, and both combined cost 57us
17:17 < sipa> so you'd need k < 0.63 n
17:17 < sipa> plus you need an extra point decompression for R
17:19 < gmaxwell> sipa: you don't, however, need a point decompression for P.
17:20 < sipa> how do you compute (s/r)R without decompressing R?
17:20 < gmaxwell> You decompress R, but not the input public keys.
17:21 < sipa> ah!
17:21 < sipa> i parsed your sentence as "you don't [need to decompress R], however, [you] need a point decompression for P."
17:21 < gmaxwell> (you might need two decompressions for R unfortunately, but you can try the second one only if the first one gets you no hits)
17:22 < gmaxwell> I guess.
17:22 < sipa> you only need two decompressions if r < p-n
17:22 < sipa> which is ~never the case
17:23 < gmaxwell> with the sighash single bug a clown can compute ones that are! :)
17:23 < sipa> yeah ok
17:23 < gmaxwell> (also with the pubkey in the signature)
17:23 < sipa> can we softfork those out?
17:23 < sipa> the clowns
17:24 < gmaxwell> I think we can softfork out the single bug, but I think someone could still trigger that case using pubkey-in-signature.
17:24 < sipa> right
17:24 < sipa> i was jokingly suggesting we softfork out clowns
17:24 < sipa> but this does worsen the worst-case analysis
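Filling in the arithmetic behind sipa's k < 0.63 n (17:17): recovering k signatures costs one G multiplication plus one arbitrary-point multiplication each, while ordinary verification costs one combined multi-mult per key tried (up to n of them), so recovery wins roughly when

    k * (40us + 50us) < n * 57us   <=>   k/n < 57/90 ~= 0.63

before accounting for the extra point decompression(s) for R discussed above.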
17:27 < gmaxwell> in terms of worst case attacks, I think the concern is just someone that has few witness bytes but does lots of verification time for a valid sig?
17:27 < gmaxwell> like a bunch of 1 of 20 multisigs with the match being the last one tested?
17:27 < gmaxwell> for cases like few-of-many this kind of approach is a big win, no matter what microoptimizations are implemented.
17:28 < sipa> fair
17:28 < gmaxwell> (I sure wish we hadn't required the drop element to be zero, and instead required it to be a bitmask of matches. :P)
17:29 -!- roconnor [~roconnor@host-104-157-204-21.dyn.295.ca] has joined #secp256k1
17:31 < gmaxwell> also why the hell are there a lot of 2-of-6? that seems like a not very obviously useful policy. :P
17:31 < roconnor> I only skimmed the scrollback, and correct me if I'm wrong, but in CMS each signature can have a different sighash, so it isn't guaranteed that every message signed will be the same.
17:32 < gmaxwell> roconnor: you're right, I forgot about that; sipa corrected it somewhat later.
17:32 < gmaxwell> Usually they're the same... it sounds like it doesn't matter in any case.
17:33 < gmaxwell> (by doesn't matter I mean there isn't in fact a useful speedup from them being the same)
17:34 < roconnor> yes, that is my understanding.
17:45 < sipa> indeed
17:45 < gmaxwell> it's annoying how much speed loss there is from the loss of the multiexp.
17:46 < gmaxwell> I guess endo would help some, since the loss of the multiexp shifts costs to doubling.
17:53 < gmaxwell> sipa: so interestingly, I think in lots of cases you can get a nice speedup from doing something like: Step 1. Check K of N to see if recovery-based validation would be a win. If not, use ordinary validation; otherwise: Flip a coin, recover the first or the last signature. Match it. Exclude the earlier or later part of the list. Go to step 1.
17:54 < roconnor> oh interesting.
17:54 < gmaxwell> in a lot of cases, the single recovery lets you throw away enough entries that ordinary validation is faster. The random choices mean that the case where you always make the wrong one is exponentially unlikely, moving the worst case closer to the average case time.
17:55 < sipa> why specifically first or last? (as opposed to just a random one)
17:57 < gmaxwell> well, they have the maximum number they could potentially eliminate.
17:58 < sipa> every sig needs to be valid
17:58 < sipa> for some key
17:58 < gmaxwell> I wasn't assuming that the signature was necessarily valid.
17:58 < sipa> sure
17:59 < sipa> oh, there are annoying validity rules that cause failure to parse a pubkey to result in script failure
17:59 < gmaxwell> If you assume it's valid, I don't think it matters which order you go in... but the point remains that if you've eliminated some it eventually becomes faster to use ordinary validation.
17:59 < sipa> but only if the normative algorithm ever needs that pubkey
17:59 < gmaxwell> sipa: sure, but that's easy enough to handle.
18:00 < gmaxwell> You just track where the matches were and check if the normative algo would have looked there.
18:00 < gmaxwell> also when you read the pubkeys, you can drop all keys after a deserialization failure.
18:01 < gmaxwell> ah, you still need a tri-state failure condition though, which is annoying.
18:12 < roconnor> you can drop keys after a deserialization failure?
18:13 < gmaxwell> when it fails, neither that key nor any after it can be accessed by a CMS in a valid transaction.
18:13 < gmaxwell> (this is also not an obscure corner case; there are a bunch of signatures out there with garbage 'keys' in them.. though I'm not sure all of them fail to deserialize.)
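A toy simulation of the randomized strategy gmaxwell sketches at 17:53, with small integers standing in for keys and recover() as a hypothetical oracle that returns a signature's signer directly; a real implementation would also fall back to ordinary validation at step 1 whenever recovery stops being a win, which the toy omits:

    #include <stdio.h>
    #include <stdlib.h>

    /* stand-in for ECDSA pubkey recovery */
    static int recover(int sig) { return sig; }

    /* scan for the recovered key inside the still-live key window */
    static int find_key(const int *keys, int lo, int hi, int target) {
        for (int i = lo; i < hi; i++)
            if (keys[i] == target) return i;
        return -1;
    }

    int main(void) {
        int keys[] = {10, 20, 30, 40, 50, 60};  /* a 2-of-6 policy */
        int sigs[] = {20, 50};                  /* must match keys in order */
        int slo = 0, shi = 2, klo = 0, khi = 6;
        srand(42);
        while (shi > slo) {
            if (rand() & 1) {       /* flip a coin: take the first remaining sig */
                int j = find_key(keys, klo, khi, recover(sigs[slo]));
                if (j < 0) { printf("invalid\n"); return 1; }
                slo++; klo = j + 1; /* exclude that key and everything before it */
            } else {                /* ...or the last remaining sig */
                int j = find_key(keys, klo, khi, recover(sigs[shi - 1]));
                if (j < 0) { printf("invalid\n"); return 1; }
                shi--; khi = j;     /* exclude that key and everything after it */
            }
        }
        printf("valid\n");
        return 0;
    }

Note that the window updates are what enforce the in-order rule: every remaining signature must match a key strictly between the ones already consumed.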
18:14 < gmaxwell> I think most of the 1-of-2 in the earlier linked stats is really just a 1 of 1 with 65 bytes of garbage.
18:15 < roconnor> So there is a consensus-critical directionality to the CMS verification?
18:16 < gmaxwell> yes, more than one. the signatures need to be in pubkey order.... they're just permitted to skip.
18:16 < sipa> yes
18:17 < sipa> i think you can do the parsing as a preprocessing step, and just replace the first invalid one and all subsequent ones with dummies
18:17 < gmaxwell> like if you have a 2 of 3 and the first signature is a valid signature for the last pubkey, then the CMS returns false regardless of what the other two signatures are.
18:17 < sipa> ah, no
18:18 < sipa> indeed, what gmaxwell says
18:18 < sipa> but if the first key is unparseable instead, the script will abort
18:20 < gmaxwell> (the ordered requirement is also why it's pretty unlikely that recovery-based checking can win for _average_ performance unless the threshold is quite a bit smaller than the number of keys.)
18:31 -!- roconnor [~roconnor@host-104-157-204-21.dyn.295.ca] has quit [Read error: Connection reset by peer]
18:32 -!- roconnor [~roconnor@host-104-157-204-21.dyn.295.ca] has joined #secp256k1
18:34 < roconnor> oh man. I always thought you could run the sig-list and pubkey-list in either direction (as long as they are the same direction).
18:35 < roconnor> What's the definition of an unparseable pubkey?
18:35 < roconnor> It has to be on-curve?
18:38 < gmaxwell> No. I believe the requirement is that it has to be 33 or 65 bytes and begin with an appropriate byte.
18:38 < gmaxwell> (might even only be the length that's required)
18:38 < roconnor> I'm sure I haven't implemented this correctly in Haskell.
18:38 < roconnor> or at least never considered it.
18:51 < sipa> i believe that's exactly it
18:51 < sipa> no hybrid keys
18:51 < sipa> size must match
18:51 < sipa> and prefix must be 2, 3 or 4
18:53 < roconnor> really no hybrid keys?
18:54 < roconnor> is this segwit v0 rules or all script?
18:58 -!- ddustin_ [~ddustin@unaffiliated/ddustin] has joined #secp256k1
18:59 -!- ddustin_ [~ddustin@unaffiliated/ddustin] has quit [Read error: Connection reset by peer]
18:59 -!- ddustin_ [~ddustin@unaffiliated/ddustin] has joined #secp256k1
19:09 < sipa> bip66
19:09 < sipa> it predates segwit, i think?
19:09 < sipa> or am i misreading
19:09 < sipa> i think i am
19:09 < sipa> this is just a standardness rule
19:10 -!- ddustin_ [~ddustin@unaffiliated/ddustin] has quit [Remote host closed the connection]
19:10 -!- ddustin [~ddustin@unaffiliated/ddustin] has joined #secp256k1
19:10 < roconnor> right, there were a bunch of standardness rules about parsing of pubkeys.
19:10 < roconnor> I wasn't aware of consensus rules.
19:10 -!- ddustin [~ddustin@unaffiliated/ddustin] has quit [Read error: Connection reset by peer]
19:11 -!- ddustin [~ddustin@unaffiliated/ddustin] has joined #secp256k1
19:16 -!- ddustin [~ddustin@unaffiliated/ddustin] has quit [Ping timeout: 250 seconds]
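A sketch of the pubkey-syntax rule sipa spells out above (size must match, prefix must be 2, 3 or 4, no hybrid keys); the byte values come from the discussion, and if I recall correctly this matches the shape of Bitcoin Core's IsCompressedOrUncompressedPubKey standardness check:

    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    /* 1 if the key passes the syntactic rule, 0 otherwise */
    static int is_acceptable_pubkey(const uint8_t *key, size_t len) {
        if (len == 33 && (key[0] == 0x02 || key[0] == 0x03))
            return 1;   /* compressed */
        if (len == 65 && key[0] == 0x04)
            return 1;   /* uncompressed */
        return 0;       /* wrong size, hybrid (0x06/0x07), or garbage */
    }

    int main(void) {
        uint8_t compressed[33] = {0x02};
        uint8_t hybrid[65]     = {0x06};
        printf("%d %d\n", is_acceptable_pubkey(compressed, sizeof(compressed)),
                          is_acceptable_pubkey(hybrid, sizeof(hybrid))); /* 1 0 */
        return 0;
    }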
19:21 < gmaxwell> in segwit it's consensus, no?
19:25 < sipa> i don't think so, actually
19:26 < sipa> we have the only-compressed standardness rule in segwit though
19:39 -!- ddustin [~ddustin@unaffiliated/ddustin] has joined #secp256k1
19:39 -!- ddustin [~ddustin@unaffiliated/ddustin] has quit [Remote host closed the connection]
19:40 -!- ddustin [~ddustin@unaffiliated/ddustin] has joined #secp256k1
19:41 -!- ddustin_ [~ddustin@unaffiliated/ddustin] has joined #secp256k1
19:42 -!- ddustin [~ddustin@unaffiliated/ddustin] has quit [Read error: Connection reset by peer]
19:43 -!- ddustin_ [~ddustin@unaffiliated/ddustin] has quit [Remote host closed the connection]
19:44 -!- ddustin [~ddustin@unaffiliated/ddustin] has joined #secp256k1
19:48 -!- ddustin [~ddustin@unaffiliated/ddustin] has quit [Ping timeout: 265 seconds]
20:06 -!- roconnor [~roconnor@host-104-157-204-21.dyn.295.ca] has quit [Ping timeout: 240 seconds]
--- Log closed Tue Oct 22 00:00:43 2019