Ah, I see, I didn't catch that this scheme relies on UTXO commitments (presumably with Mark's PATRICIA tree system?). If you're doing a binary search over block contents, does that imply multiple protocol round trips per synced block? I'm still having trouble visualising how this works; perhaps you could write down an example run for me. How does it interact with the need to download chains rather than individual transactions, and to do so without round-tripping to the remote node for each block? Bloom filtering currently pulls down blocks in batches without much client/server interaction, and that is useful for performance.

Like I said, I'd rather just junk the whole notion of chain scanning and get to a point where clients are only syncing headers. If nodes were calculating a script->(outpoint, merkle branch) map in LevelDB and allowing range queries over it (a rough sketch of what I mean is below), then you could quickly pull down the relevant UTXOs along with the paths showing they did at one point exist. Nodes can still withhold evidence that those outputs were spent, but the same is true today and in practice this doesn't seem to be an issue.

The primary advantage of that approach is that it does not require a change to the consensus rules, but there are lots of unanswered questions about how it interacts with HD lookahead and so on.
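
To make the index idea a bit more concrete, here's a rough sketch in C++ against the LevelDB API. The key layout (script hash || txid || vout), the value format, and the function names (IndexOutput, QueryScript, MakeKey) are purely my own illustration, not anything that exists in Core today; I'm also hand-waving the merkle branch serialization.

```cpp
// Sketch of a script -> (outpoint, merkle branch) index over LevelDB.
// Hypothetical key layout: script_hash (32 bytes) || txid (32 bytes) || vout (4 bytes).
// Value: block hash followed by the concatenated merkle branch hashes, which
// together let a client check that the output existed in some block.

#include <leveldb/db.h>
#include <cassert>
#include <cstdint>
#include <string>
#include <utility>
#include <vector>

static std::string MakeKey(const std::string& script_hash,  // 32-byte hash of the scriptPubKey
                           const std::string& txid, uint32_t vout) {
    std::string key = script_hash;
    key += txid;
    key.append(reinterpret_cast<const char*>(&vout), sizeof(vout));
    return key;
}

// Indexing side, run while connecting a block: record each new output under
// the script that can spend it.
void IndexOutput(leveldb::DB* db, const std::string& script_hash,
                 const std::string& txid, uint32_t vout,
                 const std::string& block_hash,
                 const std::vector<std::string>& merkle_branch) {
    std::string value = block_hash;
    for (const std::string& node : merkle_branch)
        value += node;                                   // naive branch serialization
    leveldb::Status s = db->Put(leveldb::WriteOptions(),
                                MakeKey(script_hash, txid, vout), value);
    assert(s.ok());
}

// Serving side: a range query returning every (outpoint, proof) pair whose
// key starts with the requested script hash.
std::vector<std::pair<std::string, std::string>>
QueryScript(leveldb::DB* db, const std::string& script_hash) {
    std::vector<std::pair<std::string, std::string>> results;
    leveldb::Iterator* it = db->NewIterator(leveldb::ReadOptions());
    for (it->Seek(script_hash);
         it->Valid() &&
         it->key().ToString().compare(0, script_hash.size(), script_hash) == 0;
         it->Next()) {
        results.emplace_back(it->key().ToString().substr(script_hash.size()),
                             it->value().ToString());
    }
    delete it;
    return results;
}
```

Presumably a wallet would issue one such query per address in its HD lookahead window, which is exactly where the open questions about gap limits and lookahead come in.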