--- Log opened Thu Jan 27 00:00:41 2022
02:37 -!- calvinalvin [~kcalvinal@ec2-52-79-199-97.ap-northeast-2.compute.amazonaws.com] has joined #secp256k1
02:38 -!- kcalvinalvin [~kcalvinal@ec2-52-79-199-97.ap-northeast-2.compute.amazonaws.com] has quit [Ping timeout: 256 seconds]
02:38 -!- cfields [~cfields@user/cfields] has quit [Ping timeout: 256 seconds]
02:39 -!- cfields [~cfields@user/cfields] has joined #secp256k1
04:57 < andytoshi> so BlueMatt and i are talking about automatic re-randomization in rust-secp ... in the case that the user has a dependency on the `rand` crate, we are able to quite ergonomically/cheaply/reliably access shitty randomness, so we are thinking about re-randomizing when contexts are created and also after every signature
04:57 -!- robertspigler [~robertspi@2001:470:69fc:105::2d53] has joined #secp256k1
04:58 < andytoshi> my quick read of the rerandomization code in ecmult_gen_impl is that this involves an ecmult_gen ... so this'll nearly double the signing time. is that correct?
04:58 < sipa> what will?
05:04 < andytoshi> sipa: re-randomizing after every signature
05:04 < sipa> Oh we wouldn't do a full rerandomization!
05:04 < andytoshi> oh? i'm glad i asked here :)
05:05 < sipa> IIRC there was some idea of using a single bit that was produced (e.g. sign of initial nonce before fixing its parity) to make one of two transformations to the randomization
05:06 < andytoshi> oh that's really interesting
05:07 < sipa> I can't remember what the idea was or where it was discussed, but I could imagine using it to choose between either adding a constant to it, or doubling it.
05:07 < andytoshi> yep, that makes sense
05:08 < andytoshi> ok, so for now i think i'm still going to propose doing a full re-randomization after each sig, and making users opt out of this, and leaving it as a TODO to investigate this one-bit-rerandomization idea upstream
05:09 < sipa> It's a huge API change, though.
05:09 < sipa> Because it requires non-const contexts.
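To make the cost question above concrete, here is a toy Python sketch (my own model with hypothetical names, not libsecp256k1's actual context layout) of the additive-offset blinding being discussed: the context keeps a blinding scalar alongside the precomputed point `-blind*G`, signing computes `(k + blind)*G + (-blind*G) = k*G` so the raw scalar never drives the ladder, and a full rerandomization needs one extra scalar multiplication, which is why it roughly doubles signing time.

```python
import secrets

# secp256k1 parameters: field prime, group order, generator (SEC 2 constants)
FP = 2**256 - 2**32 - 977
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def ec_add(a, b):
    """Affine point addition; None represents the point at infinity."""
    if a is None:
        return b
    if b is None:
        return a
    if a[0] == b[0] and (a[1] + b[1]) % FP == 0:
        return None
    if a == b:
        lam = 3 * a[0] * a[0] * pow(2 * a[1], -1, FP) % FP
    else:
        lam = (b[1] - a[1]) * pow(b[0] - a[0], -1, FP) % FP
    x = (lam * lam - a[0] - b[0]) % FP
    return (x, (lam * (a[0] - x) - a[1]) % FP)

def ec_mul(k, pt):
    """Double-and-add scalar multiplication: the expensive ecmult_gen step."""
    r = None
    for i in range(k.bit_length()):
        if (k >> i) & 1:
            r = ec_add(r, pt)
        pt = ec_add(pt, pt)
    return r

def ec_neg(pt):
    return None if pt is None else (pt[0], (-pt[1]) % FP)

# "Context": blinding scalar plus the precomputed offset point -blind*G.
blind = secrets.randbelow(N)
blind_pt = ec_neg(ec_mul(blind, G))

def blinded_ecmult_gen(k):
    # (k + blind)*G + (-blind*G) == k*G; the bare k never enters the ladder.
    return ec_add(ec_mul((k + blind) % N, G), blind_pt)

def full_rerandomize():
    # One extra ec_mul: comparable in cost to the signing multiplication itself.
    global blind, blind_pt
    r = secrets.randbelow(N)
    blind = (blind + r) % N
    blind_pt = ec_add(blind_pt, ec_neg(ec_mul(r, G)))

k = 1 + secrets.randbelow(N - 1)
before = blinded_ecmult_gen(k)
full_rerandomize()
after = blinded_ecmult_gen(k)
assert before == after == ec_mul(k, G)  # same result before and after rerandomizing
```

The final assertion checks the point of the scheme: rerandomization changes every internal value the ladder touches without changing any output.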
05:09 < andytoshi> ahh shit yes
05:10 < sipa> Possibilities are either just having separate API functions that permit rerandomization
05:10 < andytoshi> hmm, in our case we could hide a mutex inside the context object, so they could appear to be const from an API point of view
05:10 < andytoshi> but that might be even worse, as it'd introduce surprising blocking behavior
05:10 < sipa> C89 has no concept of threads
05:10 < andytoshi> right, sorry, i'm talking about the rust-secp API again
05:11 < sipa> oh, sure
05:11 < andytoshi> but i see -- even doing the one-bit thing would require non-const contexts
05:11 < sipa> you can do it in the wrapper of course
05:11 < andytoshi> in C
05:12 < sipa> Another possibility is having a context flag that means "ignore the (old) API's constness, this context is actually only ever accessed from a single thread so you can treat it as mutable".
05:13 < andytoshi> that's tempting, though it may be too unergonomic for users to use safely
05:13 < andytoshi> and i also don't think bitcoin core could use it
05:13 < sipa> indeed, but it's not too hard to give it just a separate context for each thread
05:13 < andytoshi> sipa: my proposal, for both rust and C, might be to make the default _sign() function take a non-const context pointer, and to add another _sign_no_rerandomize() one which takes a const context pointer. (For the C API, where we are extremely loath to change the API on non-experimental things, maybe we have to invert this and have a separate sign_randomize(), but ok, in rust-secp we'll use that)
05:14 < andytoshi> sipa: yeah, that's true, especially since cloning contexts is so cheap (time-wise at least)
05:17 < sipa> There is some discussion in #881
05:19 < sipa> Oh, that's actually something separate, and much simpler.
05:20 < andytoshi> oh neat
05:20 < sipa> So we have two blinding mechanisms: the initial point offset, and the random z rescaling of it.
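The z-rescaling mechanism can be illustrated in isolation. A small Python sketch (toy code, not the library's implementation): in Jacobian coordinates `(X, Y, Z)` with affine `x = X/Z^2`, `y = Y/Z^3`, multiplying `Z` by a random nonzero `lambda` (and `X`, `Y` by `lambda^2`, `lambda^3`) leaves the represented point unchanged while rerandomizing every stored coordinate, and it needs no scalar multiplication at all, which is why it is so cheap.

```python
import secrets

FP = 2**256 - 2**32 - 977  # secp256k1 field prime

# secp256k1 generator, lifted to Jacobian coordinates with Z = 1.
X = 0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798
Y = 0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8
Z = 1

def to_affine(X, Y, Z):
    """Recover the affine point (x, y) = (X/Z^2, Y/Z^3)."""
    zi = pow(Z, -1, FP)
    return (X * zi * zi % FP, Y * zi * zi * zi % FP)

def z_rescale(X, Y, Z):
    """Multiply Z by a random nonzero lambda and fix up X and Y.

    The represented affine point is unchanged, but every stored
    coordinate -- the values a side channel might leak -- changes.
    """
    lam = 1 + secrets.randbelow(FP - 1)
    return (X * lam * lam % FP, Y * pow(lam, 3, FP) % FP, Z * lam % FP)

before = to_affine(X, Y, Z)
X, Y, Z = z_rescale(X, Y, Z)
after = to_affine(X, Y, Z)
assert before == after  # same point, fresh internal representation
```

As the discussion notes, this only rerandomizes the point representation; unlike the offset blinding, it does nothing for the scalar side of the computation.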
05:20 < sipa> Rerandomizing the z rescaling can be done trivially almost without mutable context.
05:21 < sipa> It's not quite as strong as the offset blinding though, because that one also affects the scalar computations.
05:21 < andytoshi> opened tracking issue on the rust bindings. https://github.com/rust-bitcoin/rust-secp256k1/issues/388 i didn't cc you because i didn't want to overwhelm your github inbox
05:21 < andytoshi> sipa: still, very cool
05:22 < sipa> Note that #1058 completely changes ecmult_gen, as well as how blinding works in general.
05:22 < andytoshi> ah right
05:22 < andytoshi> i actually don't remember how this affects blinding, although i did read the paper and review dettman's code way back when he proposed this..
05:23 < andytoshi> oh i see, you added the blinding stuff to #693, which is the one i reviewed :)
05:23 < sipa> it's completely different in 1058
05:23 < sipa> it uses a final offset rather than an initial offset
05:24 < sipa> and projective blinding of the first table lookup
05:24 < sipa> rather than of the initial point
05:24 < andytoshi> ok nice. i will review this code then :) at least the blinding stuff
05:25 < sipa> #881 is still compatible with it
05:27 < andytoshi> good to know. my other questions would be -- is there still a notion of explicit re-randomization? would it still be expensive? is there still a "one-bit" analogue that could be done cheaply after signing operations?
05:27 < andytoshi> but i should read the code to understand these things
05:28 < sipa> yes, those are also still possible
05:28 < sipa> differently, but still possible
05:28 < sipa> but of course need mutable contexts etc
06:11 < andytoshi> nice, read through all the blinding-related parts of #1058, and skimmed the rest. spent some time squinting at the "don't extract single bits" logic, which was pretty cool though i wasn't able to find the specific part of the RSA paper you linked that covered this
06:12 < andytoshi> impressive that you were able to pull the most significant scalar offset of the multi-comb algo into the blinding scalar offset ... so i guess this form of blinding is literally free
06:13 < sipa> well, no, it comes at the cost of a final point addition at the end
06:13 < andytoshi> ah yes, ofc
06:13 < sipa> so not doing blinding would be one group op cheaper
06:14 < andytoshi> are you sure? i think you would need to do this group op anyway, to complete the multicomb algorithm?
06:21 < sipa> no
06:21 < sipa> i mean, yes i'm sure
07:48 < sipa> the old multicomb algorithm (693) did need a constant offset on the initial point in some cases, but the new one does everything by modifying the scalar
07:49 < sipa> so it doesn't need an initial or final point addition
07:49 < sipa> but 1058 still has a final point addition to be able to do additive blinding (of the scalar)
07:56 < andytoshi> ah, i gotcha, thanks
08:02 < BlueMatt[m]> thanks andytoshi and Pieter! To build on this somewhat - I don't quite understand how big the difference is between "blind once on context creation" and "blind after each signature" - the first being a trivial change (at least in rust) and the second being a nontrivial amount of API change in both languages in that it breaks context access across different threads.
08:03 < BlueMatt[m]> I asked Pieter and got a bit of an answer, but obviously it's substantially complicated.
08:08 < andytoshi> yeah, it's hard for me to give a succinct summary ... my intuition is roughly that, in the case that our other sidechannel resistance fails so that this is even measurable:
08:09 < andytoshi> 1. rerandomizing once means that an attacker, in theory, could extract some bits, but these would be blinded by something random that the attacker wouldn't know, and wouldn't be correlatable across different devices/sessions
08:09 < andytoshi> 2. if you fully rerandomize after every ecmult_gen, any timing attack is basically completely btfo
08:09 < sipa> The difficulty in answering this is that you need a bound on how powerful the attacker is. If they can read every bit of memory, no amount of blinding can help at all. If they can't read anything, obviously blinding is pointless.
08:10 < andytoshi> 3. if you do a "one-bit rerandomization" after every ecmult_gen, i think the story is basically the same as for full rerandomization, but it's harder to argue
08:10 < andytoshi> right -- my "hypothetical attacker" here is able to somehow extract blinded bits but can't see the blinding factor directly
08:10 < andytoshi> which is an arbitrary and probably-unrealistic attacker
08:11 < sipa> I think more practical attacker models can do things like see the weight of certain intermediary values (how many bits are set in certain words used in certain operations)
08:15 < andytoshi> one thought is that i think we should expose a "one bit randomize" method to the context object, and benchmark it alongside full rerandomization
08:16 < andytoshi> and if it turns out to be practically instant next to a signing operation... i am tempted in rust-secp to try to mutex it
08:17 < andytoshi> though i guess i would actually want to use a RwLock, which allows multiple readers and only gives exclusive access during writing .. which i understand is significantly more expensive than a normal mutex, even for readers
08:17 < sipa> one bit randomize is going to be very fast compared to full signing
08:19 < andytoshi> yeah. maybe no real point in benchmarking
08:20 < sipa> i mean... it could be 1 percent i guess
08:20 < andytoshi> ok -- at the very least i'd like to add a sign_then_rerandomize() method to rust-secp. i guess i could implement that now with full rerandomization, and then later after #1058 and some followups, could improve it to one-bit it
08:21 < andytoshi> sipa: my thinking here is, if somebody tries to write a massively parallel signer, and then i jam mutexes into the context object for rerandomization purposes, how badly will they be screwed?
08:21 < sipa> if you want to do that, create a separate context per thread
08:21 < andytoshi> yeah i think you're right
08:22 < andytoshi> so from the perspective of rust-secp's API, i should just demand a non-const pointer to the context and let the caller figure out how to do that
08:22 < sipa> i mean... if the secp256k1 sign function itself is going to require a mutable context, then nearly all computationally heavy parts will need to run in the mutex
08:22 < sipa> and you'll barely get parallelism over 1
08:23 < andytoshi> right, a followup question would be "could you extract the blinding data from the context onto the local stack, and then no longer need that data to be locked"
08:23 -!- stickies-v [~stickies-@host-92-8-99-163.as13285.net] has quit [Ping timeout: 240 seconds]
08:23 < andytoshi> but even if it were possible in principle i see no way to make a C89 API that'd enable it
08:23 < sipa> i don't see how that question is relevant
08:24 -!- stickies-v [~stickies-@host-92-8-99-163.as13285.net] has joined #secp256k1
08:24 < sipa> oh, i see what you mean
08:24 < andytoshi> imagine that signing was "lock the blinding data; copy it onto the stack; unlock the data; **do expensive signing operations**; lock the blinding data again, rerandomize it, unlock it"
08:24 < andytoshi> yeah
08:25 < sipa> at that point you should just have separate sign and fast_rerandomize functions, and call one with lock and the other without
08:25 < sipa> or rather, with rwlocks i guess
08:25 < andytoshi> yeah. i agree
08:25 < sipa> but having separate contexts per thread is going to be way better
08:25 < sipa> especially now that contexts are so small
08:25 < andytoshi> so: from a C89 API point-of-view, i think we should expose a fast_rerandomize function (i can try to PR this if you want, though we should wait til after 1058)
08:26 < andytoshi> then in rust-secp we'll expose fast_rerandomize, and also provide a convenience function that signs then fast-rerandomizes. both will need exclusive context access
08:26 < sipa> instead of just having separate contexts
08:26 < sipa> i'm not sure... that may encourage people to do exactly what i described above
08:26 < andytoshi> and we'll have a bikesheddy discussion about whether this convenience wrapper should be called _sign() or _sign_then_rerandomize() or whatever
08:27 < andytoshi> sipa: ah, yeah, good point
08:27 < sipa> so i'd say
08:27 < sipa> 1) 1058
08:27 < sipa> 2) projective blinding per-signing (which is super cheap, and needs no API changes)
08:28 < andytoshi> i think maybe the best idea is to leave sign() alone, add sign_then_rerandomize(), and add a giant doc comment to sign() suggesting that the user either (a) make a ton of context objects for massively parallel ops; or (b) use sign_then_rerandomize ... depending what they are doing
08:28 < sipa> 3) add a sign_and_fast_rerandomize function that does both at once but needs a mutable context
08:28 < sipa> (alternatively, 3 could be done using a context flag)
08:28 < andytoshi> the context flag kinda scares me because it can lead to UB if it's used wrong
08:29 < sipa> i mean, sure
08:29 < andytoshi> and this is not really detectable
08:29 < sipa> but so does calling sign_then_rerandomize from multiple threads at once
08:29 < andytoshi> with ARG_CHECKs or anything
08:29 < sipa> without locking
08:29 < andytoshi> sipa: true! but in rust-secp i can stop users from doing that statically, using the rust type system
08:29 < andytoshi> so for me at least, this is much better than the context flag
08:30 < andytoshi> though i guess we also have tied the context flags into the type system
08:30 < andytoshi> so maybe it's the same
08:30 < sipa> the only difference i think is just in API proliferation
08:30 < andytoshi> yeah
08:30 < sipa> because we have 4 (?) sign functions already, i'd hate to have 8
08:31 < andytoshi> eek yeah
08:31 < andytoshi> though i would imagine that we wouldn't add a rerandomize variant to schnorrsig_sign_custom, say ... if you are using the custom thing you can do your own rerandomization
08:32 < sipa> i'd say ideally, all sign functions are changed to take a mutable context... but then unless you set this "private context" flag or whatever, it won't actually modify it
08:32 < sipa> and if you do, you can just change that flag, and get cheap rerandomization for free
08:32 < sipa> but this is an API break of course
08:33 < andytoshi> yeah for sure
08:33 < andytoshi> so, i mildly dislike this because i am going to need to assure the rust compiler "yes i am turning a constant pointer into a mutable one, but i have promises from the C code authors that this is ok" which i worry could burn me in the future if things ever change upstream
08:34 < andytoshi> whereas if the actual constness was reflected in the C API, i'd be much less likely to get burned
08:34 < andytoshi> OTOH this would mean doubling up all the C API functions, as you say
08:34 < sipa> that's fair
08:34 < andytoshi> which also sucks
08:34 < sipa> it doesn't need a cast though... you can have a const object that stores a mutable C pointer
08:34 < sipa> i mean, a pointer to a mutable C struct
08:34 < andytoshi> (i also think i need to double-up all the rust-secp functions, no matter what, because you can't abstract over mutability very well in rust, but c'est la vie)
08:35 < andytoshi> yeah that's true sipa
08:35 < sipa> but just in terms of API compatibility... this is really invasive and I'm not sure we want to burden all of libsecp256k1's users with that, even though we don't actually have guaranteed compatibility yet
08:36 < andytoshi> and then i'd just have "ordinary" scary keeping-C-mutability-in-sync-with-rust-mutability problems :)
08:36 < sipa> another question is how useful 1-bit-rerand even is
08:36 < andytoshi> i guess, what's frustrating is that i think *most* users actually would have exclusive access to the context object, or be able to easily get it
08:36 < sipa> maybe the advice should just be to rerandomize completely for every signature, or whenever feasible
08:37 < andytoshi> sipa: right ... i'm not totally convinced that just doubling or adding 1 to the blinder helps that much. *maybe* adding a random bit could help, though that's then hard to do cheaply. maybe we could also multiply by the cube root of 1 lol
08:37 < andytoshi> and get an order-3 rerandomization
08:38 < sipa> another possibility is this:
08:38 < andytoshi> maybe we could add the secret nonce to the blinder, then subtract the public nonce from the blinder point
08:38 < andytoshi> though that is potentially making things more dangerous by leaking the nonce i guess :)
08:38 < sipa> have in the context two precomputed scalars a and b, and their corresponding points A and B
08:39 < sipa> and then have 1-bit rerand do {blind += (bit ? a : b); blind_point -= (bit ? A : B); blind /= 2; blind_point *= 2;}
08:39 < andytoshi> ..then on each rerandomization, randomly add/subtract B, and double B for next time?
08:39 < andytoshi> hehe :)
08:39 < andytoshi> i like it
08:39 < sipa> oh, add or subtract, and then double
08:39 < sipa> that works too
08:40 < andytoshi> ah my thing is not exactly yours .. but essentially the same
08:40 < sipa> that at least unpredictably changes every bit of the blinding factor (though the overall transform still only has 1 bit of entropy of course)
08:40 < andytoshi> so, i *think* that would help because it would change half the bits of the randomizing scalar usually
08:40 < andytoshi> yeah
08:40 < andytoshi> but again it's hard to model the attacker
08:41 < andytoshi> e.g. if an attacker was able to extract A and/or B this would become useless
08:42 < sipa> indeed
08:42 < sipa> well maybe not
08:43 < andytoshi> but assuming not ... with this scheme, after 250 signings we have effectively done a full rerandomization. vs with the "add 1" scheme we'd have done only 8 bits of rerandomization, and with the "double" scheme it's harder to quantify but not great either, i think
08:43 < sipa> A might as well be a constant
08:43 < andytoshi> yeah agreed
08:44 < andytoshi> well, initially constant
08:44 < sipa> i mean literally a compile-time constant
08:46 < andytoshi> oh! i misread you, in my head A was blind_point
08:46 < andytoshi> and B was a point that we'd add/subtract, then double for next time
08:47 < andytoshi> but i think you have the opposite mapping
08:47 < sipa> Sorry, I have changed my proposal while talking and I think I caused some confusion.
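The add-or-subtract-then-double variant just discussed can be sketched end to end. This is a toy Python model with hypothetical names, not library code: the context holds the blinding pair (b, -b*G) plus an auxiliary pair (c, C = c*G); each one-bit rerandomization adds ±c to b, compensates on the point side with a single point addition, then doubles both c and C, so the per-signature cost is one group addition and one doubling instead of a full scalar multiplication.

```python
import secrets

# secp256k1 parameters and minimal affine group operations.
FP = 2**256 - 2**32 - 977
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def ec_add(a, b):
    """Affine point addition; None represents the point at infinity."""
    if a is None:
        return b
    if b is None:
        return a
    if a[0] == b[0] and (a[1] + b[1]) % FP == 0:
        return None
    if a == b:
        lam = 3 * a[0] * a[0] * pow(2 * a[1], -1, FP) % FP
    else:
        lam = (b[1] - a[1]) * pow(b[0] - a[0], -1, FP) % FP
    x = (lam * lam - a[0] - b[0]) % FP
    return (x, (lam * (a[0] - x) - a[1]) % FP)

def ec_mul(k, pt):
    r = None
    for i in range(k.bit_length()):
        if (k >> i) & 1:
            r = ec_add(r, pt)
        pt = ec_add(pt, pt)
    return r

def ec_neg(pt):
    return None if pt is None else (pt[0], (-pt[1]) % FP)

# Blinding pair: b and -b*G, so (k + b)*G + (-b*G) recovers k*G.
b = secrets.randbelow(N)
neg_bG = ec_neg(ec_mul(b, G))
# Auxiliary pair: c and C = c*G, doubled after every use.
c = 1 + secrets.randbelow(N - 1)
C = ec_mul(c, G)

def one_bit_rerandomize(bit):
    """b += c or b -= c (chosen by the bit), fix up -b*G with one point
    addition, then double the auxiliary pair: one add + one double total."""
    global b, neg_bG, c, C
    if bit:
        b = (b + c) % N
        neg_bG = ec_add(neg_bG, ec_neg(C))
    else:
        b = (b - c) % N
        neg_bG = ec_add(neg_bG, C)
    c = 2 * c % N
    C = ec_add(C, C)

for bit in (1, 0, 0, 1, 1, 0, 1, 0):  # e.g. one leaked nonce-sign bit per signature
    one_bit_rerandomize(bit)
    # Invariant: neg_bG is still exactly -b*G (their sum is infinity).
    assert ec_add(ec_mul(b, G), neg_bG) is None
```

Because c doubles each time, each step perturbs a different "region" of the blinding scalar, which is the intuition behind the "after ~250 signings we have effectively done a full rerandomization" claim above.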
08:47 < andytoshi> :)
09:03 < sipa> I think this works: rerand(blind, bit) { if (bit) negate(blind); blind += A; double(blind); }
09:03 < sipa> where A could be a runtime or compile-time constant
09:11 < andytoshi> i'm hesitant to make A a compile-time constant, i think it should be created uniformly on context creation (or on full_rerandomize)
09:11 < andytoshi> because otherwise everyone will be using the same blinding factor for their first few signatures
09:12 < sipa> well you have to randomize on context creation!
09:12 < sipa> so blind will be uniformly random
09:12 < andytoshi> ah!
09:12 < andytoshi> yes, i think this is good
09:13 < andytoshi> and yes, agreed that if the initial value of `blind` is random, A can be a publicly known constant
09:13 < sipa> A being random is better of course, so why not make it random
09:14 < sipa> but even if it isn't, i think the scheme has value
09:14 < andytoshi> agreed on both counts
09:15 < sipa> Pretty sure this has cycle length 2^255 at least.
09:15 < sipa> Or close to it.
09:15 < andytoshi> even if `blind` starts as 0, and if `A` is a known constant, i think this has some value as well
09:16 < andytoshi> it will make correlations between different signing sessions' sidechannel data very hard to detect, and will inject a bit of randomness into each one so that you won't be able to use very many to search for correlations across
09:17 < andytoshi> sipa: doubling has cycle length 2^256-ish
09:17 < andytoshi> which i think implies that the cycle length for "A -> A +/- B, B -> 2B" is the same
09:17 < andytoshi> actually maybe even longer
09:18 < andytoshi> (you can check in sage, N(2).order() is the same as the full scalar group order)
09:18 < sipa> First of all, we can divide the whole thing by A, which results in an equivalent scheme with A=1.
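sipa's `rerand` above is just the scalar map `blind -> 2*(±blind + A) mod n` (a real implementation would have to mirror each step on the blinded point with one negation, one addition, and one doubling). A small sketch with toy values — `A` here is an arbitrary nonzero stand-in constant, whereas in practice it and the initial `blind` would be drawn at context creation:

```python
# secp256k1 scalar group order
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
A = 0x1337C0DE  # arbitrary nonzero stand-in; random in a real context

def rerand(blind, bit):
    """The map from the discussion: if (bit) negate; add A; double."""
    if bit:
        blind = -blind % N
    return 2 * (blind + A) % N

# Even from the worst case discussed above (blind = 0, A public), two
# sessions whose extracted bits differ in one place diverge completely.
path1 = path2 = 0
for bit1, bit2 in [(0, 0), (1, 0), (0, 0), (1, 1)]:
    path1 = rerand(path1, bit1)
    path2 = rerand(path2, bit2)
assert path1 != path2

# Cycle length, all-zero bit path: b_{k+1} = 2*b_k + 2A has closed form
# b_k = 2^k * (b_0 + 2A) - 2A (mod N), so it first repeats when
# 2^k = 1 (mod N), i.e. after the multiplicative order of 2 mod N steps.
b = 5
for _ in range(10):
    b = rerand(b, 0)
assert b == (pow(2, 10, N) * (5 + 2 * A) - 2 * A) % N
```

The closed form is also the "divide the whole thing by A" observation in miniature: rescaling by A turns the map into the A=1 case without changing its cycle structure.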
09:19 < andytoshi> mathematically equivalent, yes
09:20 < andytoshi> but if A = 1, then with high probability you are only affecting the lowest bits of `blind`
09:21 < andytoshi> so from a sidechannel protection point of view, i think you are accomplishing less
09:21 < andytoshi> but it's hard for me to make this formal. i might be wrong
09:22 < sipa> oh, sure, i mean the divide by A is just to compute cycle length
09:22 < andytoshi> ah! yes, that's a clever argument
09:22 < sipa> don't actually do that
18:37 -!- calvinalvin [~kcalvinal@ec2-52-79-199-97.ap-northeast-2.compute.amazonaws.com] has quit [Quit: ZNC 1.7.4 - https://znc.in]
18:38 -!- kcalvinalvin [~kcalvinal@ec2-52-79-199-97.ap-northeast-2.compute.amazonaws.com] has joined #secp256k1
20:06 < roconnor> BTW, what's the status of amending BIP-340 to support var length messages?
22:07 -!- Netsplit *.net <-> *.split quits: roconnor
22:12 -!- roconnor [~roconnor@coq/roconnor] has joined #secp256k1
22:59 -!- halosghost [~halosghos@user/halosghost] has quit [Ping timeout: 240 seconds]
--- Log closed Fri Jan 28 00:00:42 2022