--- Log opened Fri Jul 02 00:00:54 2021
00:25 -!- rjected [~rjected@2001:470:69fc:105::949] has joined #utreexo
00:47 -!- rjected [~rjected@2001:470:69fc:105::949] has quit [Quit: Client limit exceeded: 20000]
01:00 -!- rjected [~rjected@2001:470:69fc:105::949] has joined #utreexo
01:21 -!- rjected [~rjected@2001:470:69fc:105::949] has quit [Quit: Client limit exceeded: 20000]
01:33 -!- rjected [~rjected@2001:470:69fc:105::949] has joined #utreexo
01:40 -!- rjected [~rjected@2001:470:69fc:105::949] has quit [Read error: Connection reset by peer]
02:06 -!- rjected [~rjected@2001:470:69fc:105::949] has joined #utreexo
02:29 < calvinalvin> In the current pollard, if a leaf is remembered, does it ever get "unremembered"?
02:30 < calvinalvin> I'm currently looking at https://github.com/mit-dci/utreexo/issues/264. Fixed the ingestbatchproof part, but addOne is also contributing a bit to the memory usage
02:31 < calvinalvin> I vaguely recall talking about this in a call once (that we aren't unremembering leaves)
05:08 < dergoegge> It should get unremembered when it gets removed. We had a bug where parts of the proof were sticking around after removal. https://github.com/mit-dci/utreexo/pull/198
05:10 < dergoegge> this relies on the ttls being correct and not remembering more leaves than the ttls indicate
05:10 < dergoegge> but yeah, a bug with this could easily explain higher memory usage
05:11 < dergoegge> calvinalvin i will have a look at the segwit serialization stuff now, not sure how that works
05:13 < dergoegge> looking at https://github.com/mit-dci/utreexo/issues/264 this might also come from this: https://github.com/mit-dci/utreexo/blob/3f55cb12f16dec7f9d8919b020a6f5d7bf636ced/accumulator/pollardproof.go#L36
05:14 < dergoegge> if one node in that slice is still in the pollard, then the entire slice sticks around/does not get GCed
05:14 < dergoegge> i wrote that code, not sure why i didn't think of that
05:15 < dergoegge> it did remove GC pressure though :D
05:17 < dergoegge> a quick way to fix this would be to make populateOne create new polNodes instead of taking them from the slice
05:17 < dergoegge> https://github.com/mit-dci/utreexo/blob/3f55cb12f16dec7f9d8919b020a6f5d7bf636ced/accumulator/pollardproof.go#L275:6
05:18 < dergoegge> then the GC pressure might show up again
05:19 < dergoegge> this is pretty much the reason for the nodepool class that we have in the C++ version: "Allocate a fixed number of nodes beforehand and reuse them"
14:42 -!- FelixWeis [sid154231@id-154231.stonehaven.irccloud.com] has quit [Read error: Connection reset by peer]
14:54 -!- FelixWeis [sid154231@id-154231.stonehaven.irccloud.com] has joined #utreexo
19:22 < calvinalvin> https://github.com/mit-dci/utreexo/blob/3f55cb12f16dec7f9d8919b020a6f5d7bf636ced/accumulator/pollardproof.go#L36 is the part that I fixed :)
19:23 < calvinalvin> https://github.com/mit-dci/utreexo/blob/3f55cb12f16dec7f9d8919b020a6f5d7bf636ced/accumulator/pollard.go#L149 this is the other memory issue
19:26 < calvinalvin> If I take out the remember in line 151, memory usage for the pollard stays below 100MB. If it's there, it spikes to 2GB (and more sometimes) when syncing testnet to ~1,300,000
19:26 < calvinalvin> So def something wrong there as well
--- Log closed Sat Jul 03 00:00:55 2021
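
Editor's note on the "remember" / ttl discussion (02:29-05:10): dergoegge's point is that a leaf should only stay cached in the pollard if the ttl data says it will be spent soon, and should be dropped (unremembered) when it is removed. The sketch below is an illustrative simplification of that ttl-based rule in Go; shouldRemember, ttl, and lookahead are hypothetical names, not the repo's actual API.

```go
package main

import "fmt"

// shouldRemember sketches the ttl-based caching rule: a leaf is only worth
// keeping in the pollard if it will be spent within the lookahead window.
// This is an assumption-level simplification, not utreexo's real function.
func shouldRemember(ttl, lookahead int32) bool {
	return ttl <= lookahead
}

func main() {
	fmt.Println(shouldRemember(100, 1000))   // spent soon: remember it
	fmt.Println(shouldRemember(50000, 1000)) // spent far in the future: don't
}
```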
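
Editor's note on the slice-retention issue (05:13-05:18): dergoegge's observation is a general Go GC property: a pointer into one element of a slice keeps the slice's entire backing array reachable, so a single proof node still referenced by the pollard pins every node allocated alongside it. The minimal sketch below (not the actual utreexo code; polNode, keepFromSlice, and keepCopy are illustrative names) shows the problem and the suggested fix of allocating a fresh node instead of handing out a pointer into the shared slice.

```go
package main

import "fmt"

type polNode struct {
	data [32]byte // hash-sized payload, stand-in for the real polNode
}

// keepFromSlice returns a pointer into the slice's backing array. As long as
// the caller holds that pointer, the entire backing array (all len(nodes)
// polNodes) stays reachable and cannot be garbage collected.
func keepFromSlice(nodes []polNode) *polNode {
	return &nodes[0]
}

// keepCopy allocates a fresh polNode and copies the value out, so the large
// backing array can be collected once the caller drops the slice. This mirrors
// the "make populateOne create new polNodes" fix, at the cost of more
// allocations (and hence more GC pressure).
func keepCopy(nodes []polNode) *polNode {
	n := new(polNode)
	*n = nodes[0]
	return n
}

func main() {
	proofNodes := make([]polNode, 1<<20) // ~32 MiB backing array

	kept := keepFromSlice(proofNodes) // pins the whole 32 MiB
	copied := keepCopy(proofNodes)    // pins only a single node

	fmt.Println(kept.data[0], copied.data[0])
}
```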
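
Editor's note on the nodepool remark (05:19): the idea quoted from the C++ version is to allocate a fixed number of nodes once and recycle them, avoiding both per-node allocations and accidental retention of large shared slices. The Go sketch below is a rough illustration of that pattern under those assumptions; the real nodepool class lives in the C++ implementation and may differ substantially.

```go
package main

import "fmt"

type polNode struct {
	data [32]byte
}

// nodePool pre-allocates all nodes up front and hands them out from a free
// list, so steady-state operation does no further allocation.
type nodePool struct {
	backing []polNode  // one big up-front allocation
	free    []*polNode // nodes currently available for reuse
}

func newNodePool(size int) *nodePool {
	p := &nodePool{backing: make([]polNode, size)}
	p.free = make([]*polNode, 0, size)
	for i := range p.backing {
		p.free = append(p.free, &p.backing[i])
	}
	return p
}

// get returns a node from the pool, or nil if the pool is exhausted.
func (p *nodePool) get() *polNode {
	if len(p.free) == 0 {
		return nil
	}
	n := p.free[len(p.free)-1]
	p.free = p.free[:len(p.free)-1]
	return n
}

// put zeroes a node and returns it to the pool for reuse.
func (p *nodePool) put(n *polNode) {
	*n = polNode{}
	p.free = append(p.free, n)
}

func main() {
	pool := newNodePool(4)
	n := pool.get()
	n.data[0] = 0xff
	pool.put(n)
	fmt.Println(len(pool.free)) // back to 4
}
```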