Let me ask a simpler question. How do you prove the state of the UTXO database corresponding to your wallet? With my subchain method, all the addresses in a wallet can be constrained to a single path of subchains, so the proof is O(log n). Yes, I know some people will say it is not really a proof because I didn't verify the transactions on sibling chains outside my path, but the protocol is "parent chain always decides in case of conflict", and the parent chains will have an incentive to be careful about which child blocks they commit to, since they will be merge mining the (direct) child chains. Yes, the parents can make a mistake with some very deep child-chain transactions, but the deeper you go, the lower the value of those transactions and the less they matter. Also, the children of the parents are themselves parents, so they will have the same incentive to be careful about which child chains they commit to. So, recursively, the system takes care of itself.
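
To make that concrete, here is a rough sketch (my own illustration, not a spec) of what such a proof could look like, assuming each parent block header ends with two 32-byte commitments, one per direct child chain, and that the wallet's addresses are confined to one leaf subchain:

    import hashlib

    def block_hash(header: bytes) -> bytes:
        # double-SHA256, as Bitcoin uses for block headers
        return hashlib.sha256(hashlib.sha256(header).digest()).digest()

    def child_commitments(header: bytes):
        # Hypothetical layout: last 64 bytes = (left child hash, right child hash)
        return header[-64:-32], header[-32:]

    def verify_subchain_path(root_hash: bytes, headers: list, directions: str) -> bool:
        """headers[0] is a root-chain header; headers[i+1] is a header on the
        child chain selected by directions[i] ('L' or 'R')."""
        expected = root_hash
        for i, header in enumerate(headers):
            if block_hash(header) != expected:
                return False  # this header is not committed to by the level above
            if i < len(headers) - 1:
                left, right = child_commitments(header)
                expected = left if directions[i] == 'L' else right
        return True

The proof is just the list of headers along the path, so its size grows with the depth of the tree, i.e. O(log n).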

I challenge anyone to come up with an O(log n) (or smaller) proof that each address (output) in their wallet really has the balance they think it has. If someone can do this, then maybe I will drop this idea. Actually, rusty asked this on #bitcoin-wizards last night and no one was able to answer it.

On Mon, Jun 15, 2015 at 6:00 PM, Andrew <onelineproof@gmail.com> wrote:
Pieter: I kind of see your point (but I think you're missing some key points). You mean just download all the headers and then verify the transactions you filter for using their corresponding merkle trees, right? But I still don't think that would scale as well as the tree structure I propose. Firstly, you don't really need the headers of the sibling chains; you just need the headers of the parent chains, since the parent verifies all the siblings. All you really need in a typical (non-mining) situation is the headers or full blocks along one path going down the tree starting from the root chain. That means O(log n) needs to be stored (headers or blocks), with n the number of transactions on the network. With big blocks, you still need O(n) headers. I know headers are small, but they still take up space and verification time.

Also, since you are storing the full blocks of the chains you want, you are validating the headers of those blocks and you are sure that you are seeing all the transactions in those blocks. And if certain addresses must stay on those chains, you know that you are catching all of the transactions corresponding to those addresses. If you just filter by addresses or other criteria, full nodes can deny you some of those transactions, and you may not know about it.

Say, for example, your government representative publishes one of his public addresses that is used for paying expenses. Then with my system, you can be sure to catch every transaction being spent from that address (or UTXO or whatever you want to call it). If you just filter on any transaction that includes that address, you may not catch all of those transactions. Same with incoming funds.
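
For illustration, one possible routing rule (just a sketch, not the exact rule) would be to pin an address to a leaf subchain by the leading bits of its hash, so anyone following that one root-to-leaf path of full blocks necessarily sees every transaction touching the address:

    import hashlib

    def subchain_path(address: str, depth: int) -> str:
        """Illustrative rule: route an address to a leaf subchain by the
        leading bits of its SHA256 hash, one 'L'/'R' step per tree level."""
        digest = hashlib.sha256(address.encode()).digest()
        bits = bin(int.from_bytes(digest, 'big'))[2:].zfill(256)
        return ''.join('L' if b == '0' else 'R' for b in bits[:depth])

    # A watcher of the representative's published address downloads only the
    # full blocks along subchain_path(pub_address, depth); no third-party node
    # can hide transactions from that view by filtering.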

There are also advantages for mining decentralization, as I have explained in my previous posts. So I'm still not sure you are right here...

Thanks

On Mon, Jun 15, 2015 at 5:18 PM, Mike Hearn <mike@plan99.net> wrote:

It's simple: either you care about validation, and you must validate everything, or you don't, and you don't validate anything.

Pedantically: you could validate a random subset of all scripts, to give yourself probabilistic verification rather than the binary choice of full validation vs SPV. If enough people do it with a large enough subset, the probability of a problem being detected goes up a lot. You still pay the cost of the database updates.
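
As a rough worked example (illustrative numbers only): if each auditing node checks a random fraction p of the scripts in a block, a single invalid script escapes one node with probability (1 - p) and escapes k independent nodes with probability (1 - p)^k.

    def miss_probability(p: float, k: int) -> float:
        # chance that k independent auditors, each checking a random fraction p
        # of scripts, all miss one particular invalid script
        return (1 - p) ** k

    print(miss_probability(0.10, 100))  # ~2.66e-05: 100 nodes checking 10% each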

But your main point is of course completely right, that side chains are not a way to scale up. 



--
PGP: B6AC 822C 451D 6304 6A28  49E9 7DB7 011C D53B 5647


