--- Log opened Mon May 27 00:00:25 2019
00:02 -!- spinza [~spin@155.93.246.187] has joined #c-lightning
01:25 -!- bitonic-cjp [~bitonic-c@92-111-70-106.static.v4.ziggozakelijk.nl] has joined #c-lightning
01:49 -!- darosior [6dbe8dc1@gateway/web/freenode/ip.109.190.141.193] has joined #c-lightning
03:04 -!- berndj [~berndj@azna.co.za] has quit [Ping timeout: 246 seconds]
03:11 -!- spinza [~spin@155.93.246.187] has quit [Quit: Coyote finally caught up with me...]
03:19 -!- spinza [~spin@155.93.246.187] has joined #c-lightning
04:02 -!- booyah [~bb@193.25.1.157] has quit [Ping timeout: 252 seconds]
04:52 -!- booyah [~bb@193.25.1.157] has joined #c-lightning
05:20 -!- berndj [~berndj@azna.co.za] has joined #c-lightning
05:24 < m-schmoock> meh, trying to isolate a routing problem. when I isolate it, it does not happen. I know it's an issue that happens when using the channels a lot with circular payments: eventually getroute refuses to find a route that exists. I know that, because I can still use the exact same path by using rebalance (which adds the last hop manually)
05:27 < m-schmoock> grmp. will do another hacky workaround by trying exactly that manual last hop on route failures
05:27 -!- EagleTM [~EagleTM@unaffiliated/eagletm] has joined #c-lightning
05:35 -!- EagleTM [~EagleTM@unaffiliated/eagletm] has quit [Remote host closed the connection]
06:43 -!- mdunnio [~mdunnio@208.59.170.5] has joined #c-lightning
07:13 -!- mdunnio [~mdunnio@208.59.170.5] has quit [Quit: My MacBook Air has gone to sleep. ZZZzzz…]
07:59 -!- ulrichard [~richi@dzcpe6300borminfo01-e0.static-hfc.datazug.ch] has quit [Remote host closed the connection]
08:48 -!- mdunnio [~mdunnio@208.59.170.5] has joined #c-lightning
09:00 -!- mdunnio [~mdunnio@208.59.170.5] has quit [Quit: My MacBook Air has gone to sleep. ZZZzzz…]
09:07 -!- jb55 [~jb55@S010660e327dca171.vc.shawcable.net] has joined #c-lightning
09:08 -!- bitonic-cjp [~bitonic-c@92-111-70-106.static.v4.ziggozakelijk.nl] has quit [Quit: Leaving]
09:43 -!- ctrlbreak [~ctrlbreak@142.162.20.53] has quit [Ping timeout: 252 seconds]
09:58 -!- ctrlbreak [~ctrlbreak@142.167.242.218] has joined #c-lightning
10:01 -!- k3tan [k3tan@gateway/vpn/protonvpn/k3tan] has quit [Ping timeout: 245 seconds]
10:03 -!- ctrlbreak [~ctrlbreak@142.167.242.218] has quit [Ping timeout: 258 seconds]
10:32 -!- darosior [6dbe8dc1@gateway/web/freenode/ip.109.190.141.193] has quit [Quit: Page closed]
10:33 -!- k3tan [k3tan@gateway/vpn/protonvpn/k3tan] has joined #c-lightning
10:57 -!- ctrlbreak [~ctrlbreak@156.34.88.119] has joined #c-lightning
11:32 -!- darosior [52ff9820@gateway/web/freenode/ip.82.255.152.32] has joined #c-lightning
11:46 -!- jtimon [~quassel@181.61.134.37.dynamic.jazztel.es] has joined #c-lightning
12:00 -!- spinza [~spin@155.93.246.187] has quit [Quit: Coyote finally caught up with me...]
12:13 -!- mdunnio [~mdunnio@208.59.170.5] has joined #c-lightning
12:14 -!- spinza [~spin@155.93.246.187] has joined #c-lightning
12:16 -!- takinbo [sid19838@gateway/web/irccloud.com/x-kmjdifsocslcvitq] has quit [Ping timeout: 252 seconds]
12:22 -!- mdunnio [~mdunnio@208.59.170.5] has quit [Quit: My MacBook Air has gone to sleep. ZZZzzz…]
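A rough sketch of the manual-last-hop workaround m-schmoock describes at 05:24, in the spirit of what the rebalance plugin does: ask getroute for a path only as far as the peer on the near side of the final channel, then append that final hop by hand. This assumes the pylightning LightningRpc client; the socket path, the direction bit and the fee/CLTV handling are placeholders, not working values.

    from lightning import LightningRpc  # pylightning RPC client

    rpc = LightningRpc("/tmp/lightning-rpc")  # placeholder socket path

    def route_with_manual_last_hop(dest_id, last_peer_id, last_scid, msat, final_cltv=9):
        # Ask getroute only for a path to the peer on the near side of the
        # last channel; it still finds that even when it refuses the full path.
        route = rpc.getroute(last_peer_id, msat, riskfactor=1)["route"]
        # Then append the known final channel by hand, the way rebalance does.
        route.append({
            "id": dest_id,            # final destination node id
            "channel": last_scid,     # short channel id of the last channel
            "direction": 0,           # placeholder: use the real direction bit
            "msatoshi": msat,
            "delay": final_cltv,
        })
        # NB: a usable route must also add the last channel's fee and CLTV
        # delta to every earlier hop; omitted here to keep the sketch short.
        return route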
12:29 -!- booyah [~bb@193.25.1.157] has quit [Ping timeout: 272 seconds]
12:35 < roasbeef> t0mix: it'll be watching for spends from the channel output, once it sees one hit the chain, then it knows the channel has been closed
12:35 < roasbeef> commitment = the transaction you can broadcast to unilaterally close the channel w/o interaction from the other party
12:41 < jb55> I'm starting to build a replicated db plugin + encrypted backup service. perhaps this could be generalized somehow but for now it's just going to be encrypted clightning sqlite statements. perhaps these could be converted to some standard format eventually?
12:44 < roasbeef> encrypted statements, as in queries?
12:45 < jb55> roasbeef: yeah
12:45 < roasbeef> why queries vs a serialized state decoupled from sqlite?
12:46 < jb55> roasbeef: yeah ideally it would be decoupled, queries were just easier for a prototype
12:46 < roasbeef> but what do queries do for you w/o the state? why do the queries need to be encrypted?
12:48 < jb55> roasbeef: I just wanted a way to replay the logs to rebuild the db state. everything is encrypted for privacy, the idea is that a third party could be providing the storage.
12:49 < roasbeef> so you want a db update log? seems distinct from queries, or by query are you including things like INSERT and UPDATE statements?
12:49 < jb55> roasbeef: sorry yeah just update queries, not selects
12:49 < jb55> insert/update
12:49 < roasbeef> gotcha
12:50 < jb55> roasbeef: but you're right, I probably don't need to replay everything if it's just for channel state stuff
12:50 < jb55> I might as well have some generic format from the start. just haven't evaluated if that would be bad wrt. our other db state...
12:51 < roasbeef> yeh dunno, implementation details would likely depend on the type of consistent hooks you can insert into the current sqlite execution logic
12:52 < jb55> roasbeef: rusty just added low level update/insert hooks so I figure I would just store every one of those for now. but perhaps I could just parse and convert a subset before I send it for replication, and then convert it back when restoring
12:52 < jb55> we'll see
12:54 < roasbeef> gotcha, yeh not sure about the details of that, but the thing i'd be concerned with is the level of consistency of those hooks: are they fully blocking, and can failed replications be rolled back?
12:54 < roasbeef> but ofc i have no idea what i'm talking about when it comes to c-lightning ;)
12:55 < jb55> roasbeef: yeah I was thinking the same, I'm sure replication logic isn't trivial. unfortunately sqlite doesn't have master-slave replication so we're forced to roll our own.
12:56 < jb55> roasbeef: I don't think rollbacks are that much of a concern since the db basically holds a giant lock around each stmt?
12:56 < jb55> it discourages concurrent access
12:57 -!- rusty [~rusty@pdpc/supporter/bronze/rusty] has joined #c-lightning
12:58 < jb55> I'm probably simplifying things but yeah dealing with crashes and stuff won't be fun
12:59 < roasbeef> yeh it's def involved to make sure it's very robust
13:00 -!- EagleTM [~EagleTM@unaffiliated/eagletm] has joined #c-lightning
13:01 <@cdecker> jb55: I actually have tried taking a snapshot of the DB and then replaying the updates and I got the same DB back ^^
13:01 < rusty> Yeah, the hook always gets called before we actually commit to db. If we crash afterwards, you'll be 1 commit ahead, but that's OK *by definition*.
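A minimal sketch of the kind of plugin jb55 describes: stream each write statement to an encrypted log that can later be replayed against a snapshot, which is what cdecker reports testing at 13:01. It assumes the pylightning Plugin helper and a db hook named db_write that delivers the statements just before lightningd commits them; the log path and the encrypt stub are placeholders, not the actual implementation.

    #!/usr/bin/env python3
    from lightning import Plugin  # pylightning

    plugin = Plugin()
    BACKUP_LOG = "db-statements.log"  # placeholder; would go to the backup service

    def encrypt(data: bytes) -> bytes:
        # Placeholder: the real plugin would encrypt with a key the storage
        # provider never sees, since privacy from that provider is the point.
        return data

    @plugin.hook("db_write")
    def on_db_write(plugin, writes, **kwargs):
        # Append each INSERT/UPDATE statement to the log before lightningd
        # commits it; replaying the log against a snapshot rebuilds the same DB.
        with open(BACKUP_LOG, "ab") as log:
            for stmt in writes:
                log.write(encrypt(stmt.encode()) + b"\n")
        # Let lightningd proceed with the commit (newer c-lightning releases
        # expect {"result": "continue"} here rather than a bare True).
        return True

    plugin.run()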
13:02 <@cdecker> Yep, one thing we could do is track an internal update number and bump it on every non-read-only DB commit
13:02 <@cdecker> That way we can trail the replica by 1 commit and apply it conditionally on whether we are in sync or the last needs to be rolled back
13:03 < jb55> cdecker: yeah I really like the monotonically-increasing-number interfaces where you never miss anything
13:03 < rusty> cdecker: sqlite basically does this, I think. Let me dig out the docs...
13:03 < jb55> rusty: there's WAL
13:03 < jb55> I don't know if we could use that
13:03 < jb55> I've been reading up on it
13:03 <@cdecker> rusty: sqlite3_changes tracks the number of changes since the file was opened, but not across reopens
13:05 < rusty> Was looking at SQLITE_FCNTL_DATA_VERSION, but it's not clear if it's total...
13:11 < jb55> the replication plugin would need to keep a log around just in case it fails to commit to the remote. I'm guessing the plugin itself would have to handle that state? then again, how would the plugin know what state index it's at in case it missed a db hook somehow?
13:34 -!- ctrlbreak [~ctrlbreak@156.34.88.119] has quit [Ping timeout: 246 seconds]
13:36 <@cdecker> Well, if we add a DB version counter to the `vars` table and increment it on each commit that changes the DB then we know how to handle it
13:36 <@cdecker> If the plugin store is < DB_version - 1 then we've fallen behind and need a new snapshot (only possible if the node was started without the plugin fwiw)
13:37 <@cdecker> If the plugin store is == DB_version, then the last commit wasn't successful and needs rollback
13:37 < rusty> cdecker: does the db cmd which updates the counter get sent to the plugin too? I guess so?
13:37 < rusty> cdecker: you never need to rollback, BTW.
13:38 <@cdecker> Yes, I would add the increment to the db hook as well, then the plugin can fish it out
13:39 <@cdecker> (and add it as a separate entry in the json just for easier parsing)
13:40 < jb55> so as long as the plugin gets loaded and nothing goes wrong, in theory the plugin should get every update. but if the plugin crashes and we lose sync I guess we would have to do a full backup again and start over.
13:48 -!- EagleTM [~EagleTM@unaffiliated/eagletm] has quit [Ping timeout: 252 seconds]
13:49 -!- EagleTM [~EagleTM@unaffiliated/eagletm] has joined #c-lightning
13:51 -!- ctrlbreak [~ctrlbreak@156.34.88.119] has joined #c-lightning
13:54 -!- booyah [~bb@193.25.1.157] has joined #c-lightning
14:02 -!- bitdex [~bitdex@gateway/tor-sasl/bitdex] has quit [Ping timeout: 256 seconds]
14:04 -!- ctrlbreak_MAD [~ctrlbreak@156.34.88.119] has joined #c-lightning
14:05 -!- ctrlbreak [~ctrlbreak@156.34.88.119] has quit [Ping timeout: 246 seconds]
14:19 < jb55> another thing I still need to evaluate is what the minimal set of data I actually need for these backups is. I'm tempted to say all of it for now, but a minimal channel state backup would be nice if possible without bugging everything out
14:20 < jb55> but there are things like the blocks table which seem really big and I probably don't need
14:21 < rusty> jb55: yeah, I've proposed we move it out to another db or file for exactly that reason. OTOH, it's not really that much bandwidth, given 10 minute blocks.
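The version-counter scheme cdecker sketches at 13:36-13:37 boils down to a three-way comparison between the version the plugin last stored and the node's counter. A small illustrative sketch; the function name, counter names and return values are made up for clarity, not an actual API.

    def sync_state(plugin_store: int, db_version: int) -> str:
        """What the replication plugin does after comparing the version it
        has stored against the node's current DB version counter."""
        if plugin_store == db_version - 1:
            # Normal case: the replica intentionally trails the node by one commit.
            return "in-sync"
        if plugin_store == db_version:
            # The last write the plugin recorded was never committed by the
            # node, so roll it back (though rusty notes you never really need
            # to: being one commit ahead is OK by definition).
            return "rollback-last"
        if plugin_store < db_version - 1:
            # Fallen behind (only possible if the node ran without the plugin):
            # the incremental log is no longer enough, take a fresh snapshot.
            return "need-new-snapshot"
        raise ValueError("plugin store is more than one commit ahead of the node")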
14:22 -!- EagleTM [~EagleTM@unaffiliated/eagletm] has quit [Ping timeout: 272 seconds]
14:26 -!- rusty [~rusty@pdpc/supporter/bronze/rusty] has quit [Ping timeout: 245 seconds]
14:30 -!- bitdex [~bitdex@gateway/tor-sasl/bitdex] has joined #c-lightning
14:52 -!- EagleTM [~EagleTM@unaffiliated/eagletm] has joined #c-lightning
14:57 -!- bitdex [~bitdex@gateway/tor-sasl/bitdex] has quit [Ping timeout: 256 seconds]
15:07 <@cdecker> So the utxoset can definitely be dropped/ignored; that's likely the biggest table in the DB
15:08 <@cdecker> I wouldn't drop the blocks table since loads of stuff indexes into it with foreign keys (confirmation_heights and spend_heights)
15:09 -!- booyah_ [~bb@193.25.1.157] has joined #c-lightning
15:09 -!- booyah [~bb@193.25.1.157] has quit [Read error: Connection reset by peer]
15:15 -!- darosior [52ff9820@gateway/web/freenode/ip.82.255.152.32] has quit [Quit: Page closed]
15:35 -!- jtimon [~quassel@181.61.134.37.dynamic.jazztel.es] has quit [Quit: gone]
15:48 -!- spinza [~spin@155.93.246.187] has quit [Quit: Coyote finally caught up with me...]
16:25 -!- spinza [~spin@155.93.246.187] has joined #c-lightning
16:44 -!- EagleTM [~EagleTM@unaffiliated/eagletm] has quit [Ping timeout: 248 seconds]
17:11 -!- mdunnio [~mdunnio@208.59.170.5] has joined #c-lightning
17:39 -!- StopAndDecrypt [~StopAndDe@unaffiliated/stopanddecrypt] has quit [Ping timeout: 245 seconds]
17:57 -!- rusty [~rusty@pdpc/supporter/bronze/rusty] has joined #c-lightning
18:07 -!- spinza [~spin@155.93.246.187] has quit [Quit: Coyote finally caught up with me...]
18:13 -!- spinza [~spin@155.93.246.187] has joined #c-lightning
18:13 -!- bitdex [~bitdex@gateway/tor-sasl/bitdex] has joined #c-lightning
18:16 -!- rafalcpp_ [~racalcppp@84-10-11-234.static.chello.pl] has joined #c-lightning
18:16 -!- rafalcpp [~racalcppp@84-10-11-234.static.chello.pl] has quit [Ping timeout: 258 seconds]
18:17 -!- queip [~queip@unaffiliated/rezurus] has quit [Ping timeout: 272 seconds]
18:23 -!- queip [~queip@unaffiliated/rezurus] has joined #c-lightning
19:53 -!- grubles [~grubles@unaffiliated/grubles] has quit [Quit: leaving]
20:11 -!- mdunnio [~mdunnio@208.59.170.5] has quit [Quit: My MacBook Air has gone to sleep. ZZZzzz…]
21:53 -!- rusty [~rusty@pdpc/supporter/bronze/rusty] has quit [Quit: Leaving.]
23:54 -!- rusty [~rusty@pdpc/supporter/bronze/rusty] has joined #c-lightning
--- Log closed Tue May 28 00:00:25 2019
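On the "minimal set of data" question, cdecker's 15:07-15:08 remarks suggest the statement stream could be filtered before replication: drop writes to the large utxoset table, keep blocks because confirmation_heights and spend_heights foreign-key into it. An illustrative sketch with naive matching and made-up names, not a tested filter.

    SKIP_TABLES = {"utxoset"}  # biggest table in the DB and safe to ignore, per cdecker

    def filter_writes(writes):
        """Drop write statements that only touch skippable tables. The blocks
        table stays, since confirmation_heights and spend_heights foreign-key
        into it. (Naive substring matching; good enough for a sketch.)"""
        return [stmt for stmt in writes
                if not any(table in stmt.lower() for table in SKIP_TABLES)]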