Hi sipa,

> The advantage of (a) is that it can be verified against a full block
> without access to the outputs being spent by it
>
> The advantage of (b) is that it is more compact (script reuse, and
> outputs spent within the same block as they are created).

Thanks for this breakdown. I think you've accurately summarized the sole
remaining point of discussion in this thread.

As someone who has written and reviewed code integrating the proposal all
the way up the stack (from node, to wallet, to application), IMO there's no
immediate cost to deferring the inclusion/creation of a filter that
includes prev scripts (b) instead of the outpoint, as the "regular" filter
does now. Switching to prev scripts in the _short term_ would be costly for
the set of applications already deployed (or deployed in a minimal or
flag-flip-gated fashion), as the move from outpoint to prev script is a
cascading one that impacts wallet operation, rescans, HD seed imports, etc.
Maintaining the outpoint also allows us to rely on a "single honest peer"
security model in the short term.

In the long term, the main barrier to committing the filters isn't choosing
what to place in them (once you have the gcs code, adding/removing elements
is a minor change -- see the sketch at the end of this message), but the
actual proposal to add new consensus-enforced commitments to Bitcoin in the
first place. Such a proposal would need to be generalized enough to allow
several components to be committed, would likely have versioning, and would
also need to provide the extensibility to allow additional items to be
committed in the future. To my knowledge, no such soft-fork has yet been
proposed in a serious manner, although we have years of brainstorming on
the topic. The timeline for the drafting, design, review, and deployment of
such a change would likely be measured in years, compared to the immediate
deployment of the current p2p filter model proposed in the BIP.

As a result, I see no reason to delay the p2p filter deployment (with the
outpoint) in the short term, as the long lead time of a soft-fork to add
extensible commitments to Bitcoin would give application+wallet authors
ample time to switch to the new model. Also, there's no reason that
full-node wallets which wish to primarily use the filters for rescan
purposes can't just construct them locally for that particular use case,
independent of what's currently deployed on the p2p network.

Finally, I've addressed the remaining comments on my PR modifying the BIP
from my last message.

-- Laolu


On Sat, Jun 2, 2018 at 11:12 PM Pieter Wuille via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> On Sat, Jun 2, 2018, 22:56 Tamas Blummer via bitcoin-dev <
> bitcoin-dev@lists.linuxfoundation.org> wrote:
>
>> Lighter but SPV secure nodes (filter committed) would help the network
>> (esp. Layer 2) to grow mesh like, but add more users that blindly follow
>> PoW.
>>
>> In the longer term, most users' security will be determined by either
>> trusted hubs or PoW.
>> I do not know which is worse, but we should at least offer the choice to
>> the user, therefore commit filters.
>>
>
> I don't think that's the point of discussion here. Of course, in order to
> have filters that verifiably don't lie by omission, the filters need to be
> committed to by blocks.
>
> The question is what data that filter should contain.
>
> There are two suggestions:
> (a) The scriptPubKeys of the block's outputs, and prevouts of the block's
> inputs.
> (b) The scriptPubKeys of the block's outputs, and scriptPubKeys of outputs
> being spent by the block's inputs.
>
> The advantage of (a) is that it can be verified against a full block
> without access to the outputs being spent by it. This allows light clients
> to ban nodes that give them incorrect filters, but they do need to
> actually see the blocks (partially defeating the purpose of having filters
> in the first place).
>
> The advantage of (b) is that it is more compact (script reuse, and outputs
> spent within the same block as they are created). It also has the
> advantage of being more easily usable for scanning a wallet's
> transactions. Using (a) for that may in some cases require a restart and
> refetch when an output is discovered, in order to then watch for its
> spending (whose outpoint is not known ahead of time). Especially when
> fetching multiple filters at a time, this may be an issue.
>
> I think both of these are potentially good arguments. However, once a
> committed filter exists, the advantage of (a) goes away completely:
> validation of committed filters is trivial and can be done without needing
> the full blocks in the first place.
>
> So I think the question is: do we aim for an uncommitted (a) first and a
> committed (b) later, or go for (b) immediately?
>
> Cheers,
>
> --
> Pieter
>
> _______________________________________________
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>
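
P.S. To illustrate the point above that, once the gcs filter construction
exists, swapping what goes into the filter is a minor change, here's a
rough Go sketch. The types below are simplified, hypothetical stand-ins
(not the actual btcd/wire structures), the GCS encoding itself is omitted,
and details like coinbase handling and de-duplication are skipped; the only
difference between the two variants is which piece of data is taken from
each input.

package main

import (
	"encoding/binary"
	"fmt"
)

// OutPoint identifies a previous output (simplified, illustrative type).
type OutPoint struct {
	TxID  [32]byte
	Index uint32
}

// TxIn spends a previous output; PrevScript is the scriptPubKey of the
// output being spent (assumed available to whoever builds the filter).
type TxIn struct {
	PrevOut    OutPoint
	PrevScript []byte
}

// TxOut creates a new output with the given scriptPubKey.
type TxOut struct {
	PkScript []byte
}

// Tx is a simplified transaction: inputs and outputs only.
type Tx struct {
	Inputs  []TxIn
	Outputs []TxOut
}

// serializeOutPoint flattens an outpoint into bytes (txid || LE index),
// mirroring the kind of encoding a real filter would hash over.
func serializeOutPoint(op OutPoint) []byte {
	var idx [4]byte
	binary.LittleEndian.PutUint32(idx[:], op.Index)
	return append(op.TxID[:], idx[:]...)
}

// elementsOutpointFilter gathers the items for variant (a): the
// scriptPubKeys of all created outputs, plus the prevouts spent by inputs.
func elementsOutpointFilter(block []Tx) [][]byte {
	var elems [][]byte
	for _, tx := range block {
		for _, out := range tx.Outputs {
			elems = append(elems, out.PkScript)
		}
		for _, in := range tx.Inputs {
			elems = append(elems, serializeOutPoint(in.PrevOut))
		}
	}
	return elems
}

// elementsPrevScriptFilter gathers the items for variant (b): the
// scriptPubKeys of all created outputs, plus the scriptPubKeys of the
// outputs being spent. Note the single-line difference versus variant (a).
func elementsPrevScriptFilter(block []Tx) [][]byte {
	var elems [][]byte
	for _, tx := range block {
		for _, out := range tx.Outputs {
			elems = append(elems, out.PkScript)
		}
		for _, in := range tx.Inputs {
			elems = append(elems, in.PrevScript)
		}
	}
	return elems
}

func main() {
	// A toy "block" with a single one-in, one-out transaction.
	block := []Tx{{
		Inputs:  []TxIn{{PrevOut: OutPoint{Index: 1}, PrevScript: []byte{0x51}}},
		Outputs: []TxOut{{PkScript: []byte{0x52}}},
	}}
	fmt.Println("variant (a) element count:", len(elementsOutpointFilter(block)))
	fmt.Println("variant (b) element count:", len(elementsPrevScriptFilter(block)))
}

Everything downstream of element gathering (the siphash keying, Golomb-Rice
coding, and header chain) stays the same, which is why the choice of filter
contents isn't the long pole for an eventual commitment.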