--- Log opened Wed Jun 18 00:00:16 2025
10:00 < glozow> wassupppp
10:00 < instagibbs> 👋
10:00 < glozow> #startmeeting
10:00 < corebot`> glozow: Meeting started at 2025-06-18T17:00+0000
10:00 < corebot`> glozow: Current chairs: glozow
10:00 < corebot`> glozow: Useful commands: #action #info #idea #link #topic #motion #vote #close #endmeeting
10:00 < corebot`> glozow: See also: https://hcoop-meetbot.readthedocs.io/en/stable/
10:00 < corebot`> glozow: Participants should now identify themselves with '#here' or with an alias like '#here FirstLast'
10:00 < monlovesmango> heyy
10:00 < pseudoramdom> hi hi
10:01 < glozow> welcome to PR Review Club! we're looking at "Improve TxOrphanage denial of service bounds" today: https://bitcoincore.reviews/31829
10:01 < marcofleon> hola
10:01 < theStack> hi
10:02 < glozow> did anybody get a chance to review the PR or the notes?
10:02 < instagibbs> reviewing the PR
10:02 < marcofleon> yes, mostly notes and focused on the new txorphanage
10:02 < monlovesmango> yes, mostly. haven't reviewed test commits
10:03 < glozow> instagibbs marcofleon monlovesmango: awesome!
10:03 < glozow> Why is the current TxOrphanage global maximum size limit of 100 transactions with random eviction problematic? Can you think of a concrete attack scenario that would affect 1-parent-1-child (1p1c) relay?
10:03 < pseudoramdom> Just read the notes as well
10:03 < marcofleon> i'm also sending the fuzz tests cmooon
10:04 < monlovesmango> bc it enables peers to evict other peers' orphans from your orphanage
10:04 < marcofleon> Attack scenario could be an attacker node spamming the orphanage to prevent a child paying for a low-fee parent
10:05 < marcofleon> if the child keeps getting evicted, then i guess the parent would be dropped from the mempool?
10:06 < glozow> marcofleon: in a 1p1c scenario, the parent isn't in the mempool yet. We opportunistically pair it with its child in the orphanage if we find one. But if the child gets evicted, then we're out of luck
10:06 < glozow> Can you summarize the changes to the eviction algorithm at a high level?
10:06 < monlovesmango> if a malicious peer floods you with orphans, it's pretty likely that you will evict a child that could have otherwise been resolved as 1p1c?
10:06 < pseudoramdom> +1. Low-feerate parent + CPFP scenario, attacker floods with orphan txs?
10:08 < marcofleon> well eviction is no longer random, it's based on the "worst behaving" peer, and it's the oldest announcement
10:08 < marcofleon> the highest DoS score peer will have their announcement removed
10:08 < glozow> marcofleon: yep! and we'll get into how we calculate DoS score in a later question
10:08 < glozow> Why is it desirable to allow peers to exceed their individual limits while the global limits are not reached?
10:10 < marcofleon> Because there could be a peer that is sending a lot of orphans, not necessarily dishonestly
10:10 < instagibbs> in the non-adversarial case, it could allow a lot more "honest" CPFPs through
10:10 < pseudoramdom> Not all peers may be actively broadcasting at the same time?
10:10 < marcofleon> just makes sense to not waste the space by having an inflexible limit per peer
10:10 < glozow> marcofleon: instagibbs: yes exactly. often, peers are using a lot of resources simply because they are the most helpful peer
10:11 < pseudoramdom> Is it possible for an attacker to game the DoS scoring?
10:11 < monlovesmango> rather than having a common pool of orphans, this pr will restructure the orphanage to track orphan counts and usage by peer. each peer will be subject to an orphan announcement count limit that is the global max announcement count divided by the number of peers, and allowed a set amount of weight for the orphans they announce
10:12 < glozow> This was why I originally thought of doing a "token bucket" approach where we'd allow peers resources based on an amount of tokens, and then either replenished tokens if the orphans were useful or destroyed them if it was just spam
10:13 < glozow> The new algorithm evicts announcements instead of transactions. What is the difference and why does it matter?
10:13 < monlovesmango> pseudoramdom: I was thinking about this too. I think if you flood a node with peers with counts that are just over the limit you could theoretically evict a high-weight tx from a peer with high-weight announcements but a low announcement count
10:13 < glozow> marcofleon: yeah, you might as well use the space
10:14 < marcofleon> announcements are (wtxid, peer). so if a peer is misbehaving then the orphan will only be removed for that peer. So a peer can't affect the orphan announcements of other peers
10:14 < monlovesmango> but i'm not sure that's really too much of a concern..?
10:15 < glozow> pseudoramdom: do you mean for an attacker to try to get us to evict a specific orphan? or to appear less resource-intensive than another peer and get them chosen for eviction instead?
10:15 < glozow> marcofleon: yes bingo!
10:16 < pseudoramdom> I was thinking of the latter. Staying just under the limit but still managing to evict certain orphans.
10:16 < glozow> I think this relates to pseudoramdom's question: if they did something tricky to try to get a particular orphan of theirs chosen for eviction, that's fine, because we'll keep the transaction as long as another peer has announced it
10:17 < marcofleon> as long as you have at least one honest peer
10:18 < marcofleon> the orphan should (hopefully) remain in the orphanage
10:18 < glozow> pseudoramdom: the other peer won't experience eviction unless they exceed the limits. This still presents a limitation - peers might get evicted if they're just sending stuff at a far faster rate than we manage to process it - but the point is you can't influence the eviction of another peer's announcements
10:18 < monlovesmango> any peer that stays under the peer limits can't have their orphans evicted by another peer
10:18 < glozow> monlovesmango: correct
10:19 < glozow> Why is there an announcement “limit” but a memory “reservation”?
10:19 < sipa> hi!
10:19 < glozow> sipa: hello hello
10:20 < glozow> Actually, I feel like you can call them both reservations, haha
10:21 < monlovesmango> I think bc announcement count affects CPU usage? and it's not much of a concern to allocate a certain amount of memory to each peer. guessing here, think I read something like that in the PR notes
10:21 < sipa> glozow: both have a global limit, and a per-peer reservation
10:21 < marcofleon> but one can be exceeded and the other is a decreasing share of the pie
10:21 < sipa> the difference is the announcement global limit is a constant, but the global memory limit is a function of the number of peers
10:22 < sipa> so the "constant" is a global announcement limit, and per-peer memory reservation
10:22 < marcofleon> or i think we make the assumption that there is more memory that can be used up to a certain point, but for announcements we're trying to figure out which peer is "overusing" their share
10:22 < glozow> sipa: I wouldn't call it an announcement "reservation" though, because you aren't guaranteed it. if more peers appear, your announcement limit decreases.
10:22 < monlovesmango> hahah so many ways to state the same things
10:22 < glozow> On the other hand, your memory reservation is guaranteed and constant no matter how the peer set changes
10:22 < sipa> glozow: that just means the reservation is dynamic :p
10:23 < sipa> but yeah, the term reservation is weird in that context
10:24 < marcofleon> memory reservation is guaranteed per peer yes?
10:24 < marcofleon> oh yeah you said it above
10:24 < sipa> "Yes sir, your reservation for 4 tonight at FancyDining is confirmed." - "What do you mean my reservation was dropped to 3, because another group made a reservation?!"
10:24 < glozow> marcofleon: yes. you're also guaranteed a certain number of announcements, but it's dynamic
10:25 < glozow> sipa: yeah that's what I mean is weird about "reservation"
10:25 < sipa> fair enough
10:25 < sipa> We shall commence the bikeshedding for a better term now.
10:25 < glozow> How does the per-peer memory usage reservation change as the number of peers increases? How does the per-peer announcement limit change as the number of peers increases?
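The limit-vs-reservation distinction discussed above can be sketched numerically. This is an illustrative sketch only; the constants and function names here are hypothetical placeholders, not the PR's actual values or API. The point is just that the global announcement limit is a constant (so each peer's share shrinks as peers join), while each peer's memory reservation is a constant (so the global memory limit grows with the peer count).

```python
# Hypothetical constants for illustration, not the PR's actual numbers.
GLOBAL_ANNOUNCEMENT_LIMIT = 3000       # constant, regardless of peer count
PER_PEER_MEMORY_RESERVATION = 400_000  # constant per-peer weight budget

def per_peer_announcement_limit(num_peers: int) -> int:
    """A fixed global budget divided among peers: more peers, smaller share."""
    return GLOBAL_ANNOUNCEMENT_LIMIT // max(num_peers, 1)

def global_memory_limit(num_peers: int) -> int:
    """Each peer brings its own fixed reservation, so the total grows."""
    return PER_PEER_MEMORY_RESERVATION * num_peers
```

This is why "reservation" fits memory (each peer's share is guaranteed and constant) while the per-peer announcement allowance is better called a "limit" (it shrinks whenever another peer connects).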
10:25 < marcofleon> the per-peer memory usage doesn't change iiuc
10:26 < monlovesmango> wait, why is one a function of the number of peers and the other not?
10:26 < glozow> marcofleon: yes 💯
10:26 < marcofleon> but the per-peer announcement limit decreases?
10:26 < glozow> yup
10:26 < glozow> monlovesmango: indeed, why?
10:26 < glozow> What is the purpose of the announcement limit?
10:27 < monlovesmango> is it bc announcement count affects CPU usage? and so we want to limit this globally?
10:27 < glozow> monlovesmango: how does our "budget" for CPU usage change with more peers sending us orphans?
10:28 < sipa> monlovesmango: more specifically, the *global* announcement limit directly affects the *latency* of trimming announcements - it's not because we have more peers that we can tolerate a longer latency in processing transactions
10:28 < sipa> monlovesmango: but for memory usage, it is normal and expected that your maximum memory usage goes up with more peers - if you're memory-constrained, you should limit your number of peers anyway
10:28 < glozow> (the answer is it doesn't. we can't tolerate more announcements when we have more peers)
10:29 < sipa> (sorry if i spoiled it?)
10:29 < marcofleon> in LimitOrphans we're removing announcements one by one right? and so that's why we're using that limit as a proxy for cpu usage
10:29 < monlovesmango> sipa: thank you that answered my question
10:30 < monlovesmango> marcofleon: that is also a good point
10:30 < sipa> monlovesmango: yup, the number of iterations of that loop in LimitOrphans scales directly with the *global* announcement limit
10:30 < monlovesmango> cool cool we can move on thanks all :)
10:30 < glozow> Why is it ok to remove orphan expiration?
10:31 < marcofleon> because we take care of the oldest orphans now whenever we start evicting
10:31 < glozow> marcofleon: yes exactly, that's the intuition for why the number of announcements is the number we are interested in. not the number of unique orphans (which is what we used to limit)
10:31 < sipa> glozow: FWIW, have you benchmarked how long LimitOrphans can take?
10:32 < monlovesmango> bc the orphanage now has other concrete metrics by which we can reliably evict orphans, which guarantee the oldest are evicted first and orphans that are no longer needed are removed
10:32 < glozow> marcofleon: yep! but wait, doesn't this mean we can be holding on to orphans for days, or weeks, etc?
10:32 < monlovesmango> yes..?
10:33 < glozow> sipa: not since the rewrite. I can find my old benchmarks and run them. IIRC the `AddTx`s is what takes a really long time
10:34 < marcofleon> hmm yeah i guess we can hold onto it for a while now. as long as there's no conflicting txs that arrive in a block or something
10:34 < sipa> glozow: sure, but the reason for the existence of the global announcement limit is the time that LimitOrphans can take, not AddTx... so perhaps it's worth benchmarking, and seeing if we can perhaps tolerate a higher global announcement limit (or, otherwise, be sad to find out it needs to be reduced)
10:34 < glozow> marcofleon: is it problematic?
10:35 < marcofleon> i don't think so, as long as limits aren't being exceeded, seems fine to me
10:36 < glozow> sipa: yeah definitely. I just wanted to add some context for anybody who looks at the old benchmarks. What do you think is an acceptable amount of time?
10:36 < marcofleon> unless i'm missing something...
10:36 < sipa> glozow: probably in the millisecond range?
10:36 < glozow> marcofleon: I agree with you
10:36 < glozow> sipa: 👍
10:37 < glozow> Should we also remove EraseForBlock and instead rely on eviction to remove orphans that confirm or conflict with a block? Why or why not?
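The intuition for dropping timer-based expiration can be sketched as follows. This is a toy model with hypothetical structures, not the PR's code: announcements get a global, monotonically increasing sequence number on entry, and trimming always evicts a peer's oldest (lowest-sequence) announcement first, so the stalest entries are the first to go once limits bite.

```python
import itertools

class PeerAnnouncements:
    """Hypothetical sketch: one peer's announcements in arrival order."""
    _seq = itertools.count()  # global arrival counter across all peers

    def __init__(self):
        self.entries = []  # list of (seq, wtxid), appended in arrival order

    def add(self, wtxid):
        self.entries.append((next(PeerAnnouncements._seq), wtxid))

    def evict_oldest(self):
        # Entries are appended in sequence order, so index 0 is the oldest.
        return self.entries.pop(0)
```

Under this scheme a separate expiry timer adds little: an old announcement only survives while the node is under its limits, in which case keeping it is harmless.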
10:38 < marcofleon> could maybe be worked out somehow, but feels like a separate thing
10:39 < marcofleon> so i would say no
10:39 < marcofleon> it's not the same reason that an orphan is being removed
10:39 < monlovesmango> I would also say no, bc otherwise the caller would have to have knowledge of what is in the orphanage and individually evict txs
10:40 < glozow> fwiw, I think the worst case for EraseForBlock is probably worse than LimitOrphans. But EraseForBlock happens on the scheduler thread so speed might not be as much of an issue
10:40 < sipa> glozow> it also costs an attacker mining a valid block
10:41 < glozow> sipa: is that true? You could look at what's in the projected next block and just create conflicting transactions with a lot of nonexistent utxos
10:42 < sipa> glozow: sure, but the victim will still never experience it more than once per block, which is expensive. good point though that it's not necessarily the attacker themselves that pays this cost
10:43 < glozow> right. it's not a very worthwhile attack imo
10:45 < glozow> And the benefit of freeing up this space is probably worth it
10:45 < glozow> What is the purpose of reimplementing TxOrphanageImpl using a boost::multi_index_container along with the eviction changes?
10:46 < marcofleon> it's easier on the eyes :)
10:46 < instagibbs> glozow we could just look for txid matches instead of scanning inputs :)
10:46 < monlovesmango> so that you can look up orphan announcements by either wtxid or peer?
10:47 < glozow> instagibbs: indeed. It would require adding a txid index, but maybe that's what we're evicting in practice anyway! Could measure what it looks like in the wild
10:47 < glozow> monlovesmango: we can already do that though!
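The multi-index idea raised in the question above can be approximated in a few lines of Python. This is a simplified, hypothetical analogue of what a boost::multi_index_container provides, not the PR's C++: a single collection of (wtxid, peer) announcement records with two synchronized lookups, one by wtxid and one by peer, always updated together so they cannot diverge.

```python
class AnnouncementIndex:
    """Hypothetical sketch of a two-index announcement container."""

    def __init__(self):
        self._by_wtxid = {}  # wtxid -> set of peers that announced it
        self._by_peer = {}   # peer  -> set of wtxids that peer announced

    def add(self, wtxid, peer):
        # Both "indexes" are maintained in one place, so they stay consistent.
        self._by_wtxid.setdefault(wtxid, set()).add(peer)
        self._by_peer.setdefault(peer, set()).add(wtxid)

    def erase_peer(self, peer):
        # Dropping one peer's announcements only removes an orphan once no
        # other peer still announces it (the per-announcement eviction above).
        for wtxid in self._by_peer.pop(peer, set()):
            announcers = self._by_wtxid[wtxid]
            announcers.discard(peer)
            if not announcers:
                del self._by_wtxid[wtxid]

    def announcers(self, wtxid):
        return self._by_wtxid.get(wtxid, set())
```

The older implementation kept structures like these as independent maps maintained at separate call sites; the multi-index rewrite collapses them into one container where consistency is enforced by construction.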
10:47 < instagibbs> oh right, wtxid would be the thing on hand
10:48 < glozow> instagibbs: right but same thing, maybe we're always evicting exact block txns
10:48 < instagibbs> 👍
10:48 < sipa> i think it was me who suggested using a multi_index, and the reason was because i saw the older implementation was effectively implementing a multi-index inefficiently, by having separate data structures for per-peer and per-wtxid information about announcements
10:48 < monlovesmango> glozow: haha it was just my best guess, didn't actually get around to understanding boost::multi_index_container better
10:49 < glozow> yeah it is the more natural data structure. I was also pleasantly surprised to realize that we only needed 2 indexes
10:49 < sipa> yeah, i was assuming we'd need 3
10:49 < sipa> nice find
10:50 < glozow> I've also been told many times that the existing `TxOrphanage` is hard to review
10:50 < glozow> so good to hear from marcofleon that it's easier this way
10:50 < glozow> What is a peer's "DoS Score" and how is it calculated?
10:51 < monlovesmango> it's the max between announcement_count/peer_announcement_limit and announcement_usage/peer_announcement_usage_reservation
10:52 < sipa> monlovesmango: think of an SQL database with multiple indexes on various columns, but then in-memory entirely, and in a C++y way; it's more efficient (both in memory and CPU) than having multiple independent maps (one for each index), and much easier to keep consistent (because there is no way for the different indexes to contain different information)
10:52 < marcofleon> maximum of cpu score and memory score. cpu score is a peer's number of announcements / their per-peer limit. and memory score is the sum of the weights of a peer's announced txs / the reserved memory usage per peer
10:52 < glozow> monlovesmango: marcofleon: yep. how does this compare to having 2 separate scores, and trimming "while CPU score is exceeded or memory score is exceeded"?
10:54 < marcofleon> hmmm well a peer can have a dos score of more than 1
10:54 < marcofleon> or maybe i'm confused with the q
10:54 < monlovesmango> sipa: that helps my conceptual understanding a lot thanks
10:55 < monlovesmango> glozow: this is much simpler for sure
10:55 < glozow> Er, my point was "it's the same thing"
10:55 < sipa> i think the question is: why do we have a *single* DoS score = max(mem_score, ann_score), as opposed to two different DoS scores that are never compared with one another, and a rule "trim the worst announcement offenders while there are any" + "trim the worst memory offenders while there are any"
10:55 < monlovesmango> bc we only have to track one score rather than two
10:56 < marcofleon> wait this is actually a good question, i'm not immediately seeing what the benefit of one score is over two
10:56 < marcofleon> is it more gameable somehow?
10:56 < monlovesmango> i think this way also has the advantage of allowing peers to exceed limits/reservations
10:56 < sipa> i don't think the two approaches are equivalent, fwiw, but the difference is small
10:57 < monlovesmango> as long as global limits are not reached
10:58 < marcofleon> hmm so global limits reached, we get dos scores and target a peer based on that
10:59 < glozow> So we're comparing this approach to having 2 loops. "While the global announcement limit is exceeded, pick the peer with the most announcements, evict from them. Then, while the global memory limit is exceeded, pick the peer with the most memory usage, evict from them."
10:59 < monlovesmango> well practically speaking, usually only one limit will be reached at a time so it would usually only be one loop no?
10:59 < glozow> This approach basically rolls them into 1 loop. "While the global announcement or memory limit is exceeded, pick the peer with the highest score (max ratio of both) and evict from them."
11:00 < Emc99> What is dos?
11:00 < instagibbs> denial of service
11:00 < Emc99> Thanks
11:00 < glozow> oh oops we are out of time!
11:00 < marcofleon> is the two-loop approach worse in some other way i'm not seeing, other than it's two loops?
11:01 < marcofleon> thanks for hosting and answering qs glozow! good stuff as usual
11:01 < glozow> fwiw, I think that having a ratio-based score is good if we want to consider giving different peers different reservation amounts
11:01 < monlovesmango> one small flaw with this approach is that if the count limit is reached first, the highest-DoS peer might actually be violating the memory reservation, and removing it won't immediately resolve the global limit being exceeded
11:02 < monlovesmango> thanks for hosting glozow!!
11:02 < glozow> #endmeeting
11:02 < corebot`> glozow: Meeting ended at 2025-06-18T18:02+0000
11:02 < corebot`> glozow: Raw log: https://achow101.com/ircmeetings/2025/bitcoin-core-pr-reviews.2025-06-18_17_00.log.json
11:02 < corebot`> glozow: Formatted log: https://achow101.com/ircmeetings/2025/bitcoin-core-pr-reviews.2025-06-18_17_00.log.html
11:02 < corebot`> glozow: Minutes: https://achow101.com/ircmeetings/2025/bitcoin-core-pr-reviews.2025-06-18_17_00.html
11:03 < marcofleon> i gotta run, wish i could stay more to ask/talk. Another time!
11:03 < glozow> thanks for reviewing!
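The single-loop trimming described in the discussion above can be sketched as a toy model. The names, data layout, and limits here are hypothetical, not the PR's implementation: a peer's DoS score is the max of its announcement-count ratio and its memory-usage ratio, and while either global limit is exceeded, the worst-scoring peer loses its oldest announcement.

```python
def dos_score(anns, ann_limit, mem_reservation):
    """anns: one peer's list of (seq, weight) announcements, oldest first."""
    return max(len(anns) / ann_limit,
               sum(w for _, w in anns) / mem_reservation)

def trim(peers, global_ann_limit, global_mem_limit, ann_limit, mem_reservation):
    """peers: dict of peer id -> list of (seq, weight), oldest first."""
    def total_anns():
        return sum(len(a) for a in peers.values())

    def total_mem():
        return sum(w for a in peers.values() for _, w in a)

    # One loop covers both limits: re-check both each iteration, and always
    # evict from the peer whose max-of-ratios score is currently highest.
    while total_anns() > global_ann_limit or total_mem() > global_mem_limit:
        worst = max(peers,
                    key=lambda p: dos_score(peers[p], ann_limit, mem_reservation))
        peers[worst].pop(0)  # evict that peer's oldest announcement
```

Note that, as monlovesmango observes in the log, a single eviction from the worst-scoring peer doesn't necessarily reduce the limit that is actually exceeded, which is why the loop re-checks both global limits on every iteration.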
11:04 < glozow> monlovesmango: yes
--- Log closed Thu Jun 19 00:00:17 2025