--- Log opened Fri Aug 28 00:00:01 2020
00:24 -!- hodlwave [~znc-admin@68.183.145.179] has quit [Quit: ZNC 1.7.5 - https://znc.in]
00:28 -!- hodlwave [~znc-admin@68.183.145.179] has joined #rust-bitcoin
02:07 -!- jonatack [~jon@37.164.15.34] has quit [Read error: Connection reset by peer]
02:39 -!- jonatack [~jon@2a01:e0a:53c:a200:bb54:3be5:c3d0:9ce5] has joined #rust-bitcoin
03:20 -!- Dagmar14Keebler [~Dagmar14K@static.57.1.216.95.clients.your-server.de] has joined #rust-bitcoin
03:23 < elichai2> BlueMatt: what's up with the `fuzztarget` in rust-secp? is it usable? do you use it anywhere?
03:34 -!- midnight [~midnight@unaffiliated/midnightmagic] has quit [Ping timeout: 244 seconds]
03:37 -!- midnight [~midnight@unaffiliated/midnightmagic] has joined #rust-bitcoin
04:19 -!- Dagmar14Keebler [~Dagmar14K@static.57.1.216.95.clients.your-server.de] has quit [Ping timeout: 256 seconds]
04:53 -!- CjS77 [~caylemeis@195.159.29.126] has joined #rust-bitcoin
06:53 -!- shesek [~shesek@164.90.217.137] has joined #rust-bitcoin
06:53 -!- shesek [~shesek@164.90.217.137] has quit [Changing host]
06:53 -!- shesek [~shesek@unaffiliated/shesek] has joined #rust-bitcoin
07:08 -!- Dean_Guss [~dean@gateway/tor-sasl/deanguss] has quit [Remote host closed the connection]
07:08 -!- Dean_Guss [~dean@gateway/tor-sasl/deanguss] has joined #rust-bitcoin
08:01 < elichai2> argh, I coordinated 3 PRs and forgot to check travis and the no-std tests :/
08:24 < BlueMatt> elichai2: yes, rust-lightning uses and relies on fuzztarget in rust-secp and bitcoin_hashes extensively
08:24 < BlueMatt> elichai2: but I was fine with that pr that made you set a global static before each run
08:25 < elichai2> I'm asking because it looks like gibberish to me lol
08:25 < elichai2> and none of the tests pass under it, so I wonder how it is useful
08:25 < BlueMatt> ariard: client_ is better than node_, but I do think in our specific case we may want to reserve that for rust-lightning's user as the "client".
08:25 < BlueMatt> ariard: thanks for the pr. I'll take a look in a bit. gotta have coffee first
08:27 < BlueMatt> ariard: as for 0-conf, I believe it will require some kind of "negotiate random scid with peer before confirmation", so there will have to be *some* short_channel_id pointing to the forwarding channel
08:27 < BlueMatt> ariard: I thought I'd mentioned that in my initial email, maybe I need to re-mention it, but this likely also applies to the privacy stuff someone was proposing at some point :shrug:
08:30 < BlueMatt> ariard: I think the only reason to fail quickly there is to avoid sitting on the htlc with it open in our channel for no reason - that ties up resources in that channel that we could use for forwarding a payment that may pay us a fee. ignoring privacy issues, we want to fail-fast always....of course there's network-level privacy in doing relay in discrete timesteps, and we have to balance that, but I'm not sure how that applies to failures.
08:31 < BlueMatt> ariard: as for is_live() - it needs fixing anyway, I believe that's #661, so I wouldn't worry about getting it right here.
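A rough sketch of the `fuzztarget` idea elichai2 and BlueMatt discuss above, under the assumption that the feature swaps real crypto for cheap deterministic stand-ins so fuzzers exercise the calling code rather than libsecp256k1 itself; the names below (`toy_sign`, the XOR placeholder) are purely illustrative, not the rust-secp API. Because the fuzz-mode output is not a real signature, the normal test vectors cannot pass with the feature on, which matches elichai2's observation.

    // Illustrative, feature-gated stand-in for expensive crypto, in the spirit
    // of the `fuzztarget` feature discussed above. `toy_sign` is hypothetical.
    #[cfg(not(feature = "fuzztarget"))]
    fn toy_sign(msg: &[u8; 32], key: &[u8; 32]) -> [u8; 64] {
        // Placeholder standing in for a real call into libsecp256k1 signing.
        let mut sig = [0u8; 64];
        for (i, b) in msg.iter().chain(key.iter()).enumerate() {
            sig[i % 64] ^= *b;
        }
        sig
    }

    #[cfg(feature = "fuzztarget")]
    fn toy_sign(msg: &[u8; 32], _key: &[u8; 32]) -> [u8; 64] {
        // Deterministic dummy output: cheap and reproducible for the fuzzer,
        // but not a valid signature, so real test vectors fail under it.
        let mut sig = [0u8; 64];
        sig[..32].copy_from_slice(msg);
        sig
    }

    fn main() {
        let sig = toy_sign(&[1u8; 32], &[2u8; 32]);
        println!("first sig bytes: {:02x?}", &sig[..8]);
    }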
09:40 -!- yancy [~root@li1543-67.members.linode.com] has quit [Ping timeout: 256 seconds]
10:02 -!- neonknight648 [~neonknigh@195.159.29.126] has joined #rust-bitcoin
10:03 -!- neonknight64 [~neonknigh@195.159.29.126] has quit [Read error: Connection reset by peer]
10:03 -!- neonknight648 is now known as neonknight64
10:12 < ariard> BlueMatt: in that regard, client_ makes sense, as most of the time it will be data either configured by RL's user or effectively belonging to them
10:12 < ariard> BlueMatt: are you okay with client_ ? just to finish with #633
10:14 < ariard> BlueMatt: for 0-conf, yes you need a 3-party protocol: 1) relay - payee alias negotiation 2) payee - payer alias+invoice exchange 3) payer - relay HTLC sending
10:15 < ariard> BlueMatt: that's right, there's a congestion issue for us; at the same time you may have room left in the incoming channel but want to delay forwarding because the outgoing channel is congested
10:16 < ariard> BlueMatt: that's a bit of a tradeoff, we decongest faster but lose flexibility for custom relay/congestion features
10:17 < ariard> BlueMatt: I can reduce #670 to name fixing, but I bet a few months from now we'll move relay checks into process_pending_htlcs or thereabouts :)
10:17 < ariard> reviewing the enum dispatch noise rn, will update #633 just after
10:18 < BlueMatt> ariard: I'm not sure what features you have in mind?
10:19 < ariard> BlueMatt: generate-outgoing-channel-at-packet-reception, delayed forwarding, stricter relay policy
10:19 < BlueMatt> I mean I'm thinking about the payer experience - the quicker we can fail a payment, the faster they can try a different route
10:20 < ariard> But maybe the best payer experience is to generate an outgoing channel on the fly
10:20 < ariard> and so not fail the ongoing route
10:20 < BlueMatt> I don't think it's acceptable to sit on an htlc waiting to forward it until we can decide to do something...if it's just a rebalance issue, maybe, but even then we should decide to do it right away, not later.
10:21 < BlueMatt> right, ok, so even if e.g. the destination is "our client" (i.e. we want to open a new 0-conf channel to them because they'll trust us), we should do that asap, not after a forwarding delay.
10:21 < ariard> but you may receive the HTLC, decide which utxo to lock up for the outgoing channel, announce the channel to the payee, then relay the HTLC
10:22 < BlueMatt> e.g. in that case we should detect it upon receipt and generate a different event asap
10:22 < ariard> the delay is just the process_pending_htlcs_forward timer?
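The forwarding-timer batching referenced above (hold incoming HTLCs, surface a single "forwardable" event, and let the user flush the whole batch after a delay so relays happen in discrete timesteps) might look roughly like this toy sketch; the type and method names are illustrative, not rust-lightning's actual API.

    use std::time::Duration;

    // Toy stand-ins -- names only loosely follow the discussion, not the real API.
    struct PendingHtlc { outgoing_scid: u64, amount_msat: u64 }

    enum Event {
        // Ask the user to call `process_pending_htlc_forwards` after roughly
        // this delay, so many HTLCs get relayed in one discrete batch.
        PendingHTLCsForwardable { time_forwardable: Duration },
    }

    struct ToyChannelManager {
        pending_forwards: Vec<PendingHtlc>,
        pending_events: Vec<Event>,
    }

    impl ToyChannelManager {
        fn on_htlc_received(&mut self, htlc: PendingHtlc) {
            // Queue the HTLC; emit a single "forwardable" event per batch.
            if self.pending_forwards.is_empty() {
                self.pending_events.push(Event::PendingHTLCsForwardable {
                    time_forwardable: Duration::from_secs(1),
                });
            }
            self.pending_forwards.push(htlc);
        }

        // Called by the user once the suggested delay has elapsed: flush the batch.
        fn process_pending_htlc_forwards(&mut self) {
            for htlc in self.pending_forwards.drain(..) {
                println!("forwarding {} msat over scid {}", htlc.amount_msat, htlc.outgoing_scid);
            }
        }
    }

    fn main() {
        let mut cm = ToyChannelManager { pending_forwards: vec![], pending_events: vec![] };
        cm.on_htlc_received(PendingHtlc { outgoing_scid: 42, amount_msat: 1_000 });
        cm.on_htlc_received(PendingHtlc { outgoing_scid: 43, amount_msat: 2_000 });
        // ...the user waits roughly `time_forwardable`, then:
        cm.process_pending_htlc_forwards();
        assert!(cm.pending_events.len() == 1); // only one batch event was queued
    }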
10:22 < ariard> see your point, we may lack an event there
10:22 < BlueMatt> rather than generating a forwardable event, waiting for a delay, forwarding, generating a "please create forwarding channel" event, then doing it again
10:23 < ariard> I see, we verify if we have a callback; by default we apply the relay forwarding policy
10:23 < BlueMatt> right, that timer...even better, it's better for privacy to do it asap - if we can complete the new 0-conf channel creation inside the delay window, it looks the same to an outside observer (because the point of the delay window is to make things happen in discrete steps, at least for large relaying nodes)
10:23 < ariard> if yes we continue; if it fails, we fail immediately
10:25 < ariard> right, but setting up a 0-conf channel might be lengthy due to the round-trip with the outgoing hop
10:25 < ariard> and you don't want this latency hit to apply by default to every forwarded HTLC
10:25 < BlueMatt> sure, and in that case we take a hit, but then even better to do it asap vs waiting on the timer
10:26 < ariard> I agree it's better to do it asap; if we have a callback someone can still apply something custom
10:26 < BlueMatt> I'm confused, are you still suggesting that we should do it later vs detecting before we queue up the Forwardable event?
10:28 < BlueMatt> (and I don't see a reason to use a callback, just generate an event if configured to - callbacks introduce reentrancy bugs, events do not)
10:28 < ariard> we should a) decode the onion b) validate the HTLC against incoming channel settings c) validate the HTLC against default relay settings + custom relay
10:29 < BlueMatt> sure, and then d) decide based on the results of c) to enqueue a "you need to fix shit" event or a "pending htlcs relayable on timer" event
10:29 < ariard> right, we generate a PendingHTLCsRelayable
10:30 -!- belcher_ [~belcher@unaffiliated/belcher] has joined #rust-bitcoin
10:31 < ariard> you call process_pending_htlc_relayable() or custom_process_pending_htlc_relayable() (or both) and generate a PendingHTLCsForwardable
10:31 < BlueMatt> huh? the user calls that
10:31 < ariard> yes, the user, that's the top API
10:31 < BlueMatt> please no callbacks unless we have to - they introduce reentrancy bugs
10:31 < BlueMatt> right, ok
10:32 < ariard> yes, no callbacks, promise
10:32 < BlueMatt> some kind of morph_pending_htlc_into_something_relayable(htlc, new_scid) which then generates a PendingHTLCsForwardable, after which we call process_pending_htlc_relayable()
10:32 < BlueMatt> that way the relay still happens on a timer
10:33 < ariard> right, and you can't dissociate whether it failed due to relay or forward channel congestion/settings
10:33 < elichai2> Running MSan & ASan on rust-secp (with full libstd rebuild) :) https://travis-ci.org/github/rust-bitcoin/rust-secp256k1/jobs/722094792#L1564
10:33 -!- belcher [~belcher@unaffiliated/belcher] has quit [Ping timeout: 240 seconds]
10:33 < ariard> BlueMatt: #633, are you okay with the client_ prefix ?
10:34 < BlueMatt> ariard: I really prefer our_ or local_, tbh
10:34 < BlueMatt> what's your concern with our_ and local_, again?
10:35 < ariard> well, local_ leaks the bolt3 spec, so someone will be confused when calling script helpers if they must pass local_contest_delay/counterparty_contest_delay
10:36 < BlueMatt> hmm, yea, I mean ideally we'd have self-explanatory things so that our users don't have to wade through the mess that is the bolts
10:36 < BlueMatt> but i see your point
10:36 < BlueMatt> what about our_?
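A minimal sketch of the a)-d) relay pipeline laid out at 10:28-10:29 above (decode the onion, validate against the incoming channel, validate against relay policy, then either fail fast, ask the user to act, or queue the HTLC for the forwarding timer); every name here is hypothetical, not the real rust-lightning API.

    // Illustrative only: classify an incoming HTLC into one of the outcomes
    // discussed above.
    struct IncomingHtlc { amount_msat: u64, cltv_expiry: u32, outgoing_scid: Option<u64> }

    enum RelayDecision {
        // Policy check failed: fail the HTLC back immediately, don't sit on it.
        FailImmediately { reason: &'static str },
        // The "you need to fix shit" event, e.g. no usable outgoing channel yet
        // (maybe open a 0-conf one to the destination asap).
        ActionRequired { reason: &'static str },
        // Everything checks out: relay it on the next forwarding-timer tick.
        RelayableOnTimer { outgoing_scid: u64 },
    }

    const MIN_RELAY_MSAT: u64 = 1_000; // toy default relay policy
    const MIN_CLTV_DELTA: u32 = 40;

    fn classify(htlc: IncomingHtlc, current_height: u32) -> RelayDecision {
        // a) onion decoding is assumed to have happened already (it produced
        //    `outgoing_scid`); b) incoming-channel checks:
        if htlc.cltv_expiry < current_height + MIN_CLTV_DELTA {
            return RelayDecision::FailImmediately { reason: "cltv expiry too close" };
        }
        // c) default relay policy (a user-configured policy could be checked here too):
        if htlc.amount_msat < MIN_RELAY_MSAT {
            return RelayDecision::FailImmediately { reason: "below relay minimum" };
        }
        // d) decide which event to enqueue.
        match htlc.outgoing_scid {
            Some(scid) => RelayDecision::RelayableOnTimer { outgoing_scid: scid },
            None => RelayDecision::ActionRequired { reason: "no outgoing channel yet" },
        }
    }

    fn main() {
        let htlc = IncomingHtlc { amount_msat: 5_000, cltv_expiry: 700_100, outgoing_scid: None };
        match classify(htlc, 700_000) {
            RelayDecision::FailImmediately { reason } => println!("fail now: {}", reason),
            RelayDecision::ActionRequired { reason } => println!("event for the user: {}", reason),
            RelayDecision::RelayableOnTimer { outgoing_scid } => println!("relay over scid {}", outgoing_scid),
        }
    }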
10:37 < ariard> Not a fan of pronouns but I don't have a strong argument; I think Jeff gave a good reason to dismiss them
10:37 < ariard> let me check on the PR
10:39 < ariard> BlueMatt: see his point, I share this opinion https://github.com/rust-bitcoin/rust-lightning/pull/633#pullrequestreview-427431340
10:39 < ariard> client/counterparty sounds like a good generic symmetry
10:54 < BlueMatt> ariard: local_party or just party_?
10:55 < BlueMatt> there really isn't an opposite for a counterparty :/
11:50 -!- belcher_ [~belcher@unaffiliated/belcher] has quit [Quit: Leaving]
12:30 < ariard> BlueMatt: hmmmm using party isn't great compared to counterparty, too much collision nearby
12:31 < ariard> BlueMatt: if we use client_, what's your concrete collision concern? We don't manipulate client variables, and the config is clear enough to indicate it's the local config
12:31 < ariard> not the counterparty's
12:31 < ariard> we call it UserConfig
13:22 < ariard> BlueMatt: re #679, took your comments, but I wonder if we shouldn't add a new ChannelMonitorUpdateErr variant alongside TemporaryFailure (e.g. network outage) and PermanentFailure (e.g. out-of-credit public watchtower, hardware corruption, etc.)
13:22 < ariard> like UpdateFailure, to encode the fact that the watchtower is still functional and keeps monitoring, but just rejected your update
13:23 < ariard> and that channel processing must be stopped by the channel manager on reception of UpdateFailure
13:38 < ariard> okay, just added a new comment for now, I think we might still introduce more fine-grained error types in the future
13:43 < ariard> BlueMatt: note for the future, we should have an authenticated control protocol between watchtower and coordinator to avoid forgery of update rejects in case of a watchtower site compromise
13:43 < ariard> BlueMatt: a nice extension of the external signer
13:45 < BlueMatt> ariard: no, I don't think it needs a new enum? ChannelManager handles it the same, and that enum is primarily for ChannelManager's purposes.
13:45 < BlueMatt> ariard: if you really want we can change what ChannelMonitor functions return.
13:46 < BlueMatt> but, in general, if ChannelMonitor returns an Err it MUST still be valid to scan the chain with, so I think maybe adding more docs to indicate this isn't that big a change; the change is that not only is it still valid but it also *has* changed despite returning an Err, which is very strange.
13:54 < ariard> BlueMatt: yes, I just finally added a clearer comment, lmk what you think about it
13:55 < ariard> BlueMatt: it's very strange because how many APIs do you know for a distributed system with *two* masters, where one of them is itself another distributed system :p ?
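For context, the error enum under debate can be sketched roughly as follows; the first two variants follow the TemporaryFailure/PermanentFailure split mentioned above, while UpdateFailure is only ariard's proposed (hypothetical) addition, not something that exists.

    // Sketch, not the real definition.
    #[allow(dead_code)]
    enum ToyChannelMonitorUpdateErr {
        // The update could not be applied right now (e.g. a network outage on
        // the way to a remote watchtower); it is expected to be retried later.
        TemporaryFailure,
        // The monitor is gone for good (e.g. out-of-credit watchtower, hardware
        // corruption); the channel has to be force-closed.
        PermanentFailure,
        // Hypothetical: the watchtower stays functional and keeps monitoring,
        // but rejected this particular update, so the ChannelManager should
        // stop progressing the channel's state.
        UpdateFailure,
    }

    fn main() {
        // How a caller might branch on it (illustrative):
        let err = ToyChannelMonitorUpdateErr::TemporaryFailure;
        match err {
            ToyChannelMonitorUpdateErr::TemporaryFailure => println!("pause the channel, retry the update later"),
            ToyChannelMonitorUpdateErr::PermanentFailure => println!("force-close the channel"),
            ToyChannelMonitorUpdateErr::UpdateFailure => println!("keep monitoring, but halt channel progress"),
        }
    }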
13:58 < ariard> maybe some kind of weird self-healing network filesystem
14:02 < BlueMatt> ariard: eh, I wouldn't call it two masters - you still have a quorum requirement, but, like, for a bunch of distributed filesystems you could read without a quorum, depending on the design
14:50 < BlueMatt> yeesh ffs, rustc nightly links against llvm 11-git, so to run memory sanitizer with C code linked you have to hope your distro ships random-ass git versions of stuff
14:57 < BlueMatt> oh, great, and RwLock isn't memsan-safe, even with -Zbuild-std
15:04 < BlueMatt> huh, seemingly nor is Vec, ugh
18:20 -!- jonatack [~jon@2a01:e0a:53c:a200:bb54:3be5:c3d0:9ce5] has quit [Ping timeout: 240 seconds]
18:20 -!- jonatack [~jon@2a01:e0a:53c:a200:bb54:3be5:c3d0:9ce5] has joined #rust-bitcoin
21:57 -!- wallet42_ [sid154231@gateway/web/irccloud.com/x-hwhlmefxzyxrngtu] has quit [Ping timeout: 240 seconds]
21:58 -!- wallet42_ [sid154231@gateway/web/irccloud.com/x-wpjdpdpszehscbap] has joined #rust-bitcoin
22:01 -!- BlueMatt [~BlueMatt@unaffiliated/bluematt] has quit [Ping timeout: 240 seconds]
22:01 -!- sgeisler [sid356034@gateway/web/irccloud.com/x-xovlpgilwcahhtkw] has quit [Ping timeout: 240 seconds]
22:02 -!- sgeisler [sid356034@gateway/web/irccloud.com/x-vbvikcdqkjouvktk] has joined #rust-bitcoin
22:03 -!- fiatjaf [~fiatjaf@2804:7f2:2a84:b2d3:ea40:f2ff:fe85:d2dc] has quit [Ping timeout: 240 seconds]
22:03 -!- BlueMatt [~BlueMatt@unaffiliated/bluematt] has joined #rust-bitcoin
22:05 -!- fiatjaf [~fiatjaf@2804:7f2:2a84:b2d3:ea40:f2ff:fe85:d2dc] has joined #rust-bitcoin
22:53 -!- shesek [~shesek@unaffiliated/shesek] has quit [Remote host closed the connection]
--- Log closed Sat Aug 29 00:00:02 2020