--- Log opened Wed Dec 18 00:00:40 2019
00:51 -!- jonatack [~jon@2a01:e0a:53c:a200:bb54:3be5:c3d0:9ce5] has quit [Ping timeout: 252 seconds]
03:06 -!- Tierra81Hane [~Tierra81H@ns334669.ip-5-196-64.eu] has joined #rust-bitcoin
03:34 -!- andytoshi [~apoelstra@unaffiliated/andytoshi] has quit [Read error: Connection reset by peer]
03:46 < orlovsky> stevenroose: I have done wtxid and the necessary unit test, so the PR is ready for review again
03:46 -!- orlovsky [~dr-orlovs@194.230.155.171] has quit [Quit: Textual IRC Client: www.textualapp.com]
03:46 -!- dr-orlovsky [~dr-orlovs@194.230.155.171] has joined #rust-bitcoin
03:47 < dr-orlovsky> Travis is completely broken: it fails to build 50-75% of the time at the apt install stage, i.e. before the actual compilation. I have tried more than 10 times and still can't get through...
03:48 -!- Tierra81Hane [~Tierra81H@ns334669.ip-5-196-64.eu] has quit [Ping timeout: 252 seconds]
03:48 -!- ghost43 [~daer@gateway/tor-sasl/daer] has joined #rust-bitcoin
04:52 < stevenroose> dr-orlovsky: I noticed. Let's give it a day of rest and try tomorrow :D Btw, I commented with a commit I did with some fixes for the merkle root stuff. Please see if you like it. Feel free to just apply it in an extra commit.
04:55 < stevenroose> andytoshi, BlueMatt, dongcarl[m] and others: is it worth adding some complexity to the merkle root calculation code if it could reduce the allocations to a single allocation of 50% of all the hashes combined? (currently it allocates all the hashes, then allocates half, then 1/4, 1/8, ...)
05:14 < stevenroose> ah, turned out quite nice and elegant
05:17 < stevenroose> dr-orlovsky: if it's ok with you, I'd prefer to just push those two commits to your PR, have others review them and get the PR out as a whole
05:19 < stevenroose> Passes all the tests locally. I don't know who wrote the original merkle root code, but it allocated a new vector on every recursive layer..
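The single-allocation approach stevenroose describes (folding each merkle level in place instead of allocating a fresh, half-sized vector per layer) can be sketched roughly as follows. This is not the rust-bitcoin code: `u64` values and a toy `combine` function stand in for `sha256d::Hash` and double-SHA256 concatenation, and `merkle_root` is a hypothetical name.

```rust
// Placeholder for SHA256d(a || b); the real code hashes the two
// 32-byte child hashes together.
fn combine(a: u64, b: u64) -> u64 {
    a.wrapping_mul(31).wrapping_add(b)
}

// Compute the merkle root by folding pairs in place: each level
// overwrites the front of the buffer, so the input Vec is the only
// allocation made.
fn merkle_root(mut hashes: Vec<u64>) -> Option<u64> {
    if hashes.is_empty() {
        return None;
    }
    while hashes.len() > 1 {
        if hashes.len() % 2 == 1 {
            // Bitcoin duplicates the last hash on odd-length levels.
            let last = *hashes.last().unwrap();
            hashes.push(last);
        }
        for i in 0..hashes.len() / 2 {
            hashes[i] = combine(hashes[2 * i], hashes[2 * i + 1]);
        }
        let half = hashes.len() / 2;
        // truncate() keeps the buffer's capacity; no reallocation.
        hashes.truncate(half);
    }
    Some(hashes[0])
}
```

Taking the leaves by value lets the fold reuse the caller's buffer, which is what makes the "one allocation of all the hashes" accounting work out.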
05:23 -!- dpc [dpcmatrixo@gateway/shell/matrix.org/x-xpdhahjwypbbqwrv] has quit [Ping timeout: 252 seconds]
05:23 -!- dongcarl[m] [dongcarlca@gateway/shell/matrix.org/x-ojmddeyaitrdjeon] has quit [Ping timeout: 252 seconds]
05:26 < dr-orlovsky> stevenroose: thank you, I'm completely ok with that
05:36 < stevenroose> dr-orlovsky: I think for that I need commit access to your fork
05:36 < stevenroose> GitHub doesn't have GitLab's "allow contributors to add commits to this PR" feature, apparently.
06:23 -!- Kiminuo [~mix@213.128.80.60] has quit [Ping timeout: 268 seconds]
06:25 < dr-orlovsky> stevenroose: how can I provide you with that?
06:25 < dr-orlovsky> or maybe you can just make a PR into my branch from your account?
06:25 < dr-orlovsky> like clone the `hashtypes` branch into your fork, change it, and propose a PR back - I will just accept it
06:46 < stevenroose> dr-orlovsky: sure, https://github.com/pandoracore/rust-bitcoin/pull/1
06:47 < dr-orlovsky> done!
06:54 < stevenroose> ok so now let's look at Travis again
06:54 < stevenroose> eeeh, couldn't you have done `git merge --ff-only`? That merge commit is really not needed..
06:57 -!- gribble [~gribble@unaffiliated/nanotube/bot/gribble] has quit [Read error: Connection reset by peer]
06:59 < stevenroose> dr-orlovsky: it looks like Travis is working today. Could you remove the merge commit? Then I think the PR might finally be ready for merging, so we can ask for a few more reviews.
07:04 < dr-orlovsky> hm, what do you mean by "remove the merge commit"? In this case it will also remove all of your changes...
07:08 -!- gribble [~gribble@unaffiliated/nanotube/bot/gribble] has joined #rust-bitcoin
07:23 < dr-orlovsky> stevenroose: and Travis continues to fail...
07:28 < stevenroose> dr-orlovsky: don't close/open the issue, I can restart the job until it works :D
07:29 < dr-orlovsky> perfect
07:30 -!- Kiminuo [~mix@213.128.80.60] has joined #rust-bitcoin
07:35 < dr-orlovsky> I noticed that Travis is constantly failing both beta and nightly builds at the stage "creating directory /home/travis/.cache/sccache"
07:35 < dr-orlovsky> it just doesn't get past it and times out
07:35 -!- dpc [dpcmatrixo@gateway/shell/matrix.org/x-tvcdgoiqlvsdcilz] has joined #rust-bitcoin
07:35 -!- dongcarl[m] [dongcarlca@gateway/shell/matrix.org/x-ckbgeasavffjmhdd] has joined #rust-bitcoin
07:46 -!- dongcarl[m] [dongcarlca@gateway/shell/matrix.org/x-ckbgeasavffjmhdd] has quit [Read error: Connection reset by peer]
07:46 -!- dpc [dpcmatrixo@gateway/shell/matrix.org/x-tvcdgoiqlvsdcilz] has quit [Remote host closed the connection]
08:11 < stevenroose> dr-orlovsky: hmm, I cleared the caches and retried
08:12 < stevenroose> everyone else: the newtypes PR (the last one for the release IIRC) should be good for review: https://github.com/rust-bitcoin/rust-bitcoin/pull/349
08:51 < stevenroose> success!
08:54 < stevenroose> dr-orlovsky: please still remove the merge commit, though
09:02 -!- dpc [dpcmatrixo@gateway/shell/matrix.org/x-iilvzzsgjancbebs] has joined #rust-bitcoin
09:02 -!- dongcarl[m] [dongcarlca@gateway/shell/matrix.org/x-fcykwvmbaqfkvzmv] has joined #rust-bitcoin
09:15 -!- andytoshi [~apoelstra@wpsoftware.net] has joined #rust-bitcoin
09:15 -!- andytoshi [~apoelstra@wpsoftware.net] has quit [Changing host]
09:15 -!- andytoshi [~apoelstra@unaffiliated/andytoshi] has joined #rust-bitcoin
09:19 -!- jonatack [~jon@2a01:e0a:53c:a200:bb54:3be5:c3d0:9ce5] has joined #rust-bitcoin
09:29 < dr-orlovsky> stevenroose: sorry, I just don't get it: if I remove it, it will also remove all of your changes
09:41 -!- fiatjaf_ [~fiatjaf@162.243.220.95] has quit [Quit: ~]
09:54 -!- michaelfolkson [~textual@2a00:23c5:be04:e501:99be:e649:5b10:dfc1] has joined #rust-bitcoin
10:05 < dr-orlovsky> sorry, got it, I just had to use local git and point the branch head to your last commit
10:09 -!- guest534543 [~mix@213.128.80.60] has joined #rust-bitcoin
10:11 -!- Kiminuo [~mix@213.128.80.60] has quit [Ping timeout: 258 seconds]
10:18 -!- guest534543 [~mix@213.128.80.60] has quit [Quit: Leaving]
10:18 -!- Kiminuo [~mix@213.128.80.60] has joined #rust-bitcoin
10:20 < Kiminuo> dpc, https://www.diffchecker.com/8OMeUCb8 - [wip] - two algos for duplicate input
10:20 < Kiminuo> bench for 100_000 items:
10:20 < Kiminuo> test blockdata::transaction::tests::bench_duplicate_inputs_hashset ... bench: 7,236,990 ns/iter (+/- 211,757)
10:20 < Kiminuo> test blockdata::transaction::tests::bench_duplicate_inputs_linear ... bench: 430,859 ns/iter (+/- 26,654)
10:21 < Kiminuo> bench for 10 items (meaning ten TxIns):
10:21 < Kiminuo> test blockdata::transaction::tests::bench_duplicate_inputs_hashset ... bench: 500 ns/iter (+/- 21)
10:21 < Kiminuo> test blockdata::transaction::tests::bench_duplicate_inputs_linear ... bench: 9 ns/iter (+/- 0)
10:23 < dpc> Kiminuo: Can one clone an iterator? Shoot ... I don't know how that works exactly. Is it a deep clone or a shallow clone?
10:23 < dpc> Anyway, I wouldn't do it via an iterator anyway.
10:23 < Kiminuo> the are_inputs_duplicate_* functions accept an iterator so that the functions are reasonably general. One might prefer a vector because it's indexable by numbers
10:24 < Kiminuo> dpc, well, i'm not sure, I don't know Rust well enough
10:24 < Kiminuo> some kind of memory bench would show that
10:24 < dpc> I don't think the way it is implemented is correct. You would have to clone this copy in the loop over and over, otherwise ... I think it will always be empty.
10:25 < dpc> I would just do it directly on the `Tx`, two nested loops, so that you can do `for i in 0..inputs.len() for j in i..inputs.len() {`
10:25 < andytoshi> you can clone most iterators
10:25 < andytoshi> i don't understand what a "shallow clone" would be
10:25 < dpc> This way it is `n * (n-1) / 2` instead of `n*n`
10:25 < andytoshi> the resulting two iterators would be independent
10:26 < andytoshi> the clone would be very cheap. most iterators would be Copy actually, except that this would be surprising to users who might copy things by accident
10:27 < dpc> And the `v1 as *const TxIn != v2 as *const TxIn` seems wasteful if you know it is always the same tx.
10:27 < Kiminuo> dpc, `for i in 0..inputs.len() for j in i..inputs.len() {` <- yeah, i know. It's just that on the TxIn level it's easier to test. But I can adjust it. No problem
10:28 < dpc> Right now the speedup is like 20x, which is too good?
10:28 < Kiminuo> "I don't think the way it is implemented is correct. You would have to clone this copy in the loop over and over, otherwise ... I think it will always be empty." >> I know how to verify that, mmt
10:29 < dpc> Kiminuo: `inputs2` never gets rewound, so it makes all of the code effectively O(n)
10:29 < dpc> Which would explain the too-good perf.
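The nested-loop version dpc proposes, indexing a slice directly instead of cloning iterators, might look like the sketch below. `u32` values stand in for the real `TxIn`/`OutPoint` comparison, the function name is made up, and the inner loop starts at `i + 1` so an element is never compared with itself (the point of the `*const` cast in the original diff).

```rust
// Quadratic duplicate check over a slice: each unordered pair is
// compared exactly once, n * (n - 1) / 2 comparisons, no allocation.
fn has_duplicate_inputs(inputs: &[u32]) -> bool {
    for i in 0..inputs.len() {
        // Start at i + 1: skips self-comparison and already-seen pairs.
        for j in (i + 1)..inputs.len() {
            if inputs[i] == inputs[j] {
                return true;
            }
        }
    }
    false
}
```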
10:29 < Kiminuo> might be
10:29 < Kiminuo> yeah
10:30 < Kiminuo> yeah, correct
10:30 < Kiminuo> okay then, I'll try just changing the iterators to vectors, to bench it
10:30 < dpc> Also, you would have to parametrize the benchmark somehow to find the cutoff point. I would guess that around 32 or 64 inputs the quadratic algorithm will start to become slow, but I pulled that out of my butt, so it might as well be 8 or 1024. :D
10:31 < dpc> I think if you just move the `let mut inputs2 = inputs.clone();` into the inner loop ... then it is effectively that.
10:32 < Kiminuo> trying not to picture that
10:32 < Kiminuo> :-P
10:32 < dpc> There's also no need to compare with your own self, so you would need `let _ = inputs.next()`. I guess otherwise it will always fail anyway.
10:34 < dpc> Maybe the compiler will optimize out that `v1 as *const TxIn != v2 as *const TxIn` comparison. Why do you need to cast to `*const`? That would ... compare the pointers or something? Which will fail miserably if you have two values that are the same, but in different memory locations, no?
10:35 < dpc> Oh ... I'm just silly. This is comparing the txid of the input. Silly, silly me.
10:36 < dpc> Anyway - we want byte-by-byte comparison, I believe, no funky `*const`s. The comparison will be false 255/256 of the time on the first byte anyway.
10:36 < dpc> So pointer comparison wouldn't be faster anyway.
10:37 < Kiminuo> that comparison was there so that an element isn't reported as equal to itself. nothing else
10:39 < dpc> I don't understand why it would be equal in the first place, but I think it is wrong. It will do pointer comparison, no?
10:39 < dpc> You take a &TxIn and turn it into *TxIn, and it seems to me that `==` on `*TxIn` will compare the pointer, not the content of the `TxIn`.
10:40 < Kiminuo> i see, that was wrong
10:41 < dpc> `*const` looks so foreign to me now that I forgot what it was
10:41 < Kiminuo> https://www.diffchecker.com/AdCMSslz
10:43 < Kiminuo> and the output is:
10:43 < Kiminuo> test blockdata::transaction::tests::bench_duplicate_inputs_hashset ... bench: 532 ns/iter (+/- 433)
10:43 < Kiminuo> test blockdata::transaction::tests::bench_duplicate_inputs_linear ... bench: 23 ns/iter (+/- 7)
10:43 < dpc> Kiminuo: `previous_input` compares both the txid and the vout index, right?
10:43 < Kiminuo> it should, yes
10:43 < Kiminuo> It's an OutPoint
10:43 < dpc> `dup_inputs_data` has how many inputs?
10:44 < dpc> The code looks correct then, but it is still 20x?
10:44 < Kiminuo> it's hidden on line 773: `for _n in 1..10 {` -> so 10 inputs
10:45 < dpc> Ok... Nice.
10:45 < Kiminuo> well, when I increase that number to, let's say, 1000 I would not expect the same result
10:45 < dpc> Can you try something big? 256? :D
10:45 < Kiminuo> so let's try it
10:45 < Kiminuo> yeah
10:46 < Kiminuo> 256:
10:46 < Kiminuo> test blockdata::transaction::tests::bench_duplicate_inputs_hashset ... bench: 10,687 ns/iter (+/- 4,068)
10:46 < Kiminuo> test blockdata::transaction::tests::bench_duplicate_inputs_linear ... bench: 636 ns/iter (+/- 265)
10:47 < dpc> Naaaah... :D
10:47 < dpc> We messed up something? :D
10:47 < dpc> Try 16k? :D
10:47 < Kiminuo> well, looks very suspicious
10:47 < Kiminuo> trying
10:48 < Kiminuo> test blockdata::transaction::tests::bench_duplicate_inputs_hashset ... bench: 1,114,070 ns/iter (+/- 88,187)
10:48 < Kiminuo> test blockdata::transaction::tests::bench_duplicate_inputs_linear ... bench: 47,620 ns/iter (+/- 2,421)
10:48 < dpc> Oh. I know!
10:49 < dpc> You are benchmarking the vec that has a duplicate!
10:49 < dpc> And it so happens that linear will find it very fast.
10:49 < dpc> We should benchmark the exhaustive case, not the easy one.
10:49 < Kiminuo> yeah, right
10:50 < dpc> Too good to be true, had to be not true.
10:52 < Kiminuo> 16k is quite a lot, given how noisy my computer fan is
10:52 < Kiminuo> test blockdata::transaction::tests::bench_duplicate_inputs_hashset ... bench: 1,108,076 ns/iter (+/- 41,589)
10:52 < Kiminuo> test blockdata::transaction::tests::bench_duplicate_inputs_linear ... bench: 186,467,650 ns/iter (+/- 14,803,609)
10:52 < dpc> Nice and slow. :D
10:52 < Kiminuo> finally, some reasonable results
10:52 < dpc> Back to 16? :D
10:53 < Kiminuo> no, back to 256 first :)
10:53 < dpc> All right. :)
10:53 < Kiminuo> test blockdata::transaction::tests::bench_duplicate_inputs_hashset ... bench: 10,683 ns/iter (+/- 2,193)
10:53 < Kiminuo> test blockdata::transaction::tests::bench_duplicate_inputs_linear ... bench: 32,738 ns/iter (+/- 2,518)
10:54 < stevenroose> I missed the conversation, is there an updated version of the code? there is a bug in the first link.
10:54 < Kiminuo> for 256
10:54 < dpc> Nice. Maybe even 64 will be even. :)
10:54 < stevenroose> .any() advances the second iterator, so only the first element of the first iterator is actually checked
10:54 < stevenroose> ah, I found a second link
10:55 < dpc> Kiminuo: You probably want to post the latest code.
10:55 < stevenroose> Kiminuo: &Vec is never used; you should use &[TxIn] instead
10:56 < dpc> I think the iterator-based version (with `.clone()`) will work as well, so you can convert to that for genericity, after we make sure the current vec-based version works OK.
10:56 < Kiminuo> test blockdata::transaction::tests::bench_duplicate_inputs_hashset ... bench: 1,342 ns/iter (+/- 56)
10:56 < Kiminuo> test blockdata::transaction::tests::bench_duplicate_inputs_linear ... bench: 511 ns/iter (+/- 19)
10:56 < Kiminuo> for 32
10:56 < dpc> Yay!
10:56 < Kiminuo> stevenroose, I'll publish what I have now and then I can modify it
10:57 < stevenroose> I liked the double-for approach, it makes it n * n/2
10:57 < Kiminuo> https://www.diffchecker.com/igi5YPAy
10:57 < dpc> stevenroose: That's what we're benchmarking now, yes.
10:58 < dpc> A hybrid version could use linear based on the iterator size hint. If it's missing or bigger than X (which we're trying to establish), it could go conservative; otherwise linear.
10:59 < stevenroose> Kiminuo: also, why not use HashSet::with_capacity?
11:00 < stevenroose> instead of the reserve
11:00 < Kiminuo> I may try
11:00 < Kiminuo> test blockdata::transaction::tests::bench_duplicate_inputs_hashset ... bench: 2,651 ns/iter (+/- 212)
11:00 < Kiminuo> test blockdata::transaction::tests::bench_duplicate_inputs_linear ... bench: 2,187 ns/iter (+/- 85)
11:00 < Kiminuo> for 64
11:00 < stevenroose> I guess if there is no clear, indisputable best approach, it's better left to the user.
11:00 < stevenroose> I suppose when you're doing this on an entire block, you'll want to allocate a hashset once and use it for all transactions, clearing it between txs
11:01 < dpc> The size hint of the upper bound is solid information.
11:02 < dpc> If the upper bound is known and smaller than 64, it is safe to use linear, IMO.
11:02 < stevenroose> dpc: yeah, but still, if a user would want to use their own hashset in a real-world application, having the method is not as useful.
11:03 < stevenroose> also, when checking a block, you could use a single hashset for all inputs, not one per tx
11:03 < dpc> stevenroose: As it is right now, the function takes an iterator, so it would allocate the HashSet once if given an iterator of all inputs in the block, no?
11:03 < dpc> I don't understand what you are concerned about.
11:04 < stevenroose> dpc: ah yeah, sure, if it's a top-level method it could, sure
11:04 < dpc> stevenroose: Do you want to reuse the `HashSet` between checks of many blocks?
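The HashSet variant with stevenroose's `with_capacity` suggestion could look roughly like this; `u32` again stands in for the real `OutPoint`, and the function name is made up for illustration.

```rust
use std::collections::HashSet;

// Allocation-once duplicate check: with_capacity sizes the set up
// front, replacing a HashSet::new() followed by reserve().
fn has_duplicate_inputs_hashset(inputs: &[u32]) -> bool {
    let mut seen = HashSet::with_capacity(inputs.len());
    for input in inputs {
        // insert() returns false if the value was already present.
        if !seen.insert(*input) {
            return true;
        }
    }
    false
}
```

This is O(n) expected time versus the quadratic scan, which is why it wins at 16k inputs but loses below a few dozen, where the allocation and hashing overhead dominates.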
11:05 < Kiminuo> I think Steven is talking about the fact that are_inputs_duplicate_hashset allocates a HashSet which could be shared in some scenarios
11:05 < stevenroose> But then it could be named `has_duplicate_items` and be made generic over all hashable types :D
11:05 < dpc> Kiminuo: shared between what exactly?
11:05 < Kiminuo> dpc, between many calls to that function
11:05 -!- Dean_Guss [~dean@gateway/tor-sasl/deanguss] has quit [Remote host closed the connection]
11:06 < Kiminuo> the idea is that you would clear the hashset every time
11:06 < Kiminuo> instead of allocating it again and again
11:06 < stevenroose> dpc: e.g. one hashset that is created once for the entire lifetime of the program; you can just clear it every time you verify a new block
11:06 -!- Dean_Guss [~dean@gateway/tor-sasl/deanguss] has joined #rust-bitcoin
11:06 < stevenroose> I don't know if that'd be significantly more efficient, but it might be
11:06 < dpc> You could take an `Option<&mut HashSet>` or something. Or have a version of the function that takes it.
11:07 < Kiminuo> stevenroose, i think it is. (in Java it's an often-used trick to improve performance)
11:07 < stevenroose> dpc: all I'm saying is that the actual logic is quite trivial and we can't agree on one way that is clearly the best in all cases. so it might be best to leave this quite trivial task up to the downstream user
11:08 < dpc> Java is slow. Also, I wonder if `.clear` on a hashset isn't slower than allocating it again.
11:09 < dpc> The data in a `HashSet` is stored inline, linearly. Seems to me that `.clear()` will have to memzero the whole vector that backs it. Which is not bad, but not no-op-like.
11:09 < Kiminuo> dpc, well, ... off topic: java is used in high-frequency algorithmic trading more and more ... sayin' just to make you crazy ;)
11:10 < stevenroose> lol, I just noticed std::..::HashMap uses a hashbrown hashmap under the hood..
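The reuse pattern being discussed (one long-lived set, cleared between calls, so only the first check pays for an allocation) might be sketched as follows. The `&mut HashSet` parameter is one of the options dpc mentions; the names and `u32` stand-in type are hypothetical.

```rust
use std::collections::HashSet;

// Duplicate check against a caller-owned scratch set. clear() drops
// the old entries but keeps the allocated capacity, so repeated calls
// (e.g. once per transaction in a block) avoid reallocating.
fn has_duplicates_with(inputs: &[u32], seen: &mut HashSet<u32>) -> bool {
    seen.clear();
    inputs.iter().any(|input| !seen.insert(*input))
}
```

A block validator could then create the set once and reuse it for every transaction, or run it once over all inputs of the block.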
11:10 < dpc> Kiminuo: In Java everything would be stored as a pointer, so it's easier to clear, more expensive to access.
11:11 < stevenroose> I wonder if this is outdated then: https://github.com/Amanieu/hashbrown
11:11 -!- michaelfolkson [~textual@2a00:23c5:be04:e501:99be:e649:5b10:dfc1] has quit [Quit: Sleep mode]
11:11 < dpc> stevenroose: AFAIR, it was ported into the stdlib.
11:12 < dpc> What I'm saying is just that allocating that hashmap vs clearing it, if it is very big, has no obvious perf winner to me.
11:12 < Kiminuo> stevenroose, "all I'm saying is that the actual logic is quite trivial and we can't agree on [...]" -> Who uses rust-bitcoin?
11:14 < stevenroose> dpc: I think clear is strictly better than allocating
11:14 < stevenroose> in both cases all the elements in the map have to be dropped, but not zeroed
11:14 < dpc> stevenroose: Maybe you're right. :)
11:14 < stevenroose> and in both cases the index tables need to be initialized
11:14 < stevenroose> the only difference is allocating the memory again
11:15 < stevenroose> (dpc, I went to check hashbrown's implementation)
11:16 < dpc> Just make `are_inputs_duplicate(iterator)` and `are_inputs_duplicate(iterator, &mut HashSet)`, make one call the other, job done.
11:16 < dpc> Still, I think it is worth automatically using the linear version if the upper bound on the iterator is `<=64` or something like that.
11:18 < stevenroose> Kiminuo: I know I was the one originally suggesting the method, but I have to say that right now I don't think it makes a lot of sense to have it. I think if I had to implement block/chain validation, I'd be happier writing those 5 lines manually, knowing what's going on and being able to optimize where I can.
11:20 < Kiminuo> stevenroose, It's fine by me. I'm just enjoying myself :)
11:20 < dpc> stevenroose: You still can, no?
11:20 < Kiminuo> Anyway, if the function `are_inputs_duplicate` were to be added, a better place might be some Utils.rs file, as input duplication checking seems like an algorithm on a transaction/its inputs rather than a property of the Transaction struct
11:22 < stevenroose> Kiminuo: yeah, optimizing is fun (I did some optimizations to the merkle root calculation today :p). In fact, your code is a general duplicate checker; it can be just as easily written over, I think
11:23 < Kiminuo> :)
11:30 < stevenroose> dr-orlovsky: have a sec to apply one last diff? :)
11:30 < stevenroose> then I'll clear the Travis cache again and we're done :)
11:53 -!- michaelfolkson [~textual@2a00:23c5:be04:e501:99be:e649:5b10:dfc1] has joined #rust-bitcoin
11:54 -!- michaelfolkson [~textual@2a00:23c5:be04:e501:99be:e649:5b10:dfc1] has quit [Client Quit]
12:06 < dr-orlovsky> stevenroose: sure :)
12:10 < dr-orlovsky> stevenroose: it fails to build b/c you are using `as_hash`
12:12 < andytoshi> i published a minor as_hash version earlier today
12:12 < andytoshi> maybe you need to kick CI?
12:13 < dr-orlovsky> trying
12:13 < dpc> Kiminuo: duplication of inputs seems totally fine as a method on a Tx or a Block, if you ask me. If you don't have it there, people will just have a bit harder time finding it elsewhere. Not a big deal one way or another.
12:16 < stevenroose> dr-orlovsky: I think my diff includes a Cargo.toml update as well, no?
12:16 < stevenroose> did I forget that? it's supposed to bump _hashes to .3
12:16 < stevenroose> ah, I see it's missing
12:17 -!- michaelfolkson [~textual@2a00:23c5:be04:e501:99be:e649:5b10:dfc1] has joined #rust-bitcoin
12:17 -!- michaelfolkson [~textual@2a00:23c5:be04:e501:99be:e649:5b10:dfc1] has quit [Client Quit]
12:17 < stevenroose> please also bump _hashes to 0.7.3 IIRC, sorry about that
12:17 < dr-orlovsky> done and pushed!
12:24 < Kiminuo> dpc, if it were in a separate file, maybe stevenroose would like it more. Anyway, the code is available. Either you find it valuable or not. I want to move on to play with some other thing :)
12:25 < dpc> Kiminuo: where's the code exactly?
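The hybrid dispatch dpc keeps coming back to (linear scan below some cutoff, hash set above it) is easy to sketch over a slice. The cutoff of 64 is only the ballpark suggested by the ad-hoc benchmarks above, not a measured constant, and the names and `u32` stand-in are made up.

```rust
// Hybrid duplicate check: quadratic scan for small input counts
// (no allocation), HashSet above the cutoff (O(n) expected time).
fn has_duplicates(inputs: &[u32]) -> bool {
    // Rough crossover point from the benchmarks; tune per workload.
    const LINEAR_CUTOFF: usize = 64;
    if inputs.len() <= LINEAR_CUTOFF {
        // For each element, look for a match later in the slice.
        inputs
            .iter()
            .enumerate()
            .any(|(i, input)| inputs[i + 1..].contains(input))
    } else {
        let mut seen = std::collections::HashSet::with_capacity(inputs.len());
        inputs.iter().any(|input| !seen.insert(*input))
    }
}
```

With an iterator-based API, `Iterator::size_hint()`'s upper bound would drive the same decision, falling back to the conservative path when the hint is absent.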
12:28 < stevenroose> dr-orlovsky: ok cool! I have an eye on Travis! I'll also do a final review and ACK. andytoshi, got a minute? :) dpc and Kiminuo, feel free to review as well
12:28 < stevenroose> https://github.com/rust-bitcoin/rust-bitcoin/pull/349
12:28 < Kiminuo> let me push that to my github account
12:28 < Kiminuo> in the meantime: https://www.diffchecker.com/AdCMSslz#right-4
12:30 < Kiminuo> dpc, https://github.com/rust-bitcoin/rust-bitcoin/compare/master...kiminuo:feature/tx_has-duplicacate-inputs-01
12:32 < stevenroose> andytoshi: one tiny nit (don't worry, just ignore it) is that BitcoinHash now becomes kinda pointless as well.
12:32 < stevenroose> IMO it might as well be removed later
12:33 < BlueMatt> stevenroose: ehh, when someone uses it for "real" use, sure; today, does anyone even use it?
12:36 < stevenroose> BlueMatt: what's that replying to? the duplicate check?
12:37 < BlueMatt> optimized merkle tree calculation
12:37 < BlueMatt> (also cause it's a rabbit hole - you may want the AVX version that Bitcoin Core uses to calculate two things at once)
12:38 < stevenroose> ah, it's already included in #349. it ended up being a pretty big difference in the number of allocations and only a few lines of code more (basically just creating two versions of the calculation: one with a preallocated slice and one with an iterator)
12:38 < stevenroose> what's the AVX thing? any reference/code link?
12:40 < BlueMatt> tl;dr: if you're hashing a merkle tree you have a lot of independent hashing, this is what SIMD exists for :)
12:40 < BlueMatt> but, sure, if it's not too bad, great!
12:42 < stevenroose> BlueMatt: https://github.com/stevenroose/rust-bitcoin/commit/bf24480aabe072d40637d14a74cace4be05b6d97 is the gist
12:44 < andytoshi> i don't have a minute .. i will in a couple hours (sorry for the repeated delays!)
12:45 < andytoshi> i'm very excited about this PR and the new rust-bitcoin release tho
12:45 < andytoshi> maybe we can get that out today
13:23 < Kiminuo> I like that #349 very much!
13:25 < Kiminuo> This caught my attention: https://github.com/rust-bitcoin/rust-bitcoin/pull/349/files#diff-8a3492fb2ab337ced7eb96f9120594a3R224
13:25 < Kiminuo> What is the relationship between the TxMerkleRoot and TxMerkleBranch structs?
13:27 < Kiminuo> is it necessary to have both of them?
13:28 < Kiminuo> I have only a basic understanding of merkle trees, so it's a question rather than a suggestion
13:36 -!- vindard [~vindard@190.83.165.233] has quit [Quit: No Ping reply in 180 seconds.]
13:37 -!- vindard [~vindard@190.83.165.233] has joined #rust-bitcoin
13:48 -!- jtimon [~quassel@22.133.134.37.dynamic.jazztel.es] has joined #rust-bitcoin
14:09 -!- jtimon [~quassel@22.133.134.37.dynamic.jazztel.es] has quit [Remote host closed the connection]
14:30 -!- Kiminuo [~mix@213.128.80.60] has quit [Quit: Leaving]
14:38 < andytoshi> yeah, it's a bit questionable whether we want to distinguish between the merkle root and a branch
14:39 < andytoshi> dr-orlovsky: curious if you feel strongly about this?
14:42 < andytoshi> i commented on the PR
14:42 < andytoshi> dr-orlovsky: also, what timezone are you in?
15:48 < stevenroose> andytoshi: what would the intermediates be then? the leaf type, the root type, or plain sha256d::Hash? I'd lean towards the latter.
15:48 < stevenroose> makes PartialMerkleTree, where it's used, more generic
15:49 < stevenroose> even though it would still commit to SHA-256d. wouldn't want to get into the weeds of making PartialMerkleTree generic over T: Hash if it's not gonna be used in another use case for now
16:09 < andytoshi> the intermediates could be TxMerkleRoot
16:10 < andytoshi> or they could just be plain sha256d::Hash values
18:35 -!- dongcarl [~dongcarl@pool-108-21-84-253.nycmny.fios.verizon.net] has joined #rust-bitcoin
19:07 -!- jonatack [~jon@2a01:e0a:53c:a200:bb54:3be5:c3d0:9ce5] has quit [Ping timeout: 245 seconds]
23:20 -!- Kiminuo [~mix@213.128.80.60] has joined #rust-bitcoin
--- Log closed Thu Dec 19 00:00:41 2019