Hi ZmnSCPxj,

I do mean to have specialized computing-power vendors, which may or may not be miners. Optimizing ZKP computations is quite different from Bitcoin mining, so I expect those vendors to come from more research-driven teams focused on cryptographic engineering.

I am open on whether to put those transactions in the mempool or not. I apologize for giving an inaccurate number earlier for the verification cost. I just ran gnark-bench on my Mac M2: Groth16 verification can be as fast as 1 ms, and Plonk around 1.6 ms. So it seems even an ordinary fullnode could handle thousands of OP_ZKP transactions. In that case, OP_ZKP transactions could be put into the mempool and be available for aggregation by some vendor. Fullnodes should verify these transactions as well; it does not seem a good idea to treat them with special rules, since there is no guarantee that a given OP_ZKP transaction will ever be aggregated or recursively verified. Of course, the weighting should be carefully benchmarked and calculated. The cost of *standalone* OP_ZKP transactions might be higher due to more data and/or higher weighting. This incentivizes vendors to develop aggregation / recursive-verification services, driving down fee requirements while profiting from the spread (fee extraction). I also expect an open market where various vendors can compete against each other, so it makes sense to have these transactions openly visible to all participants.

Meanwhile, some transactions are meant to stay off-chain. For example, a would-be smart contract can aggregate many related transactions into a single OP_ZKP transaction. Those aggregated transactions should *not* be transmitted over the Bitcoin network; they need not even be valid Bitcoin transactions. Usually the smart contract's operator or its community would host such a service.
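To make the capacity claim concrete, here is a back-of-envelope sketch in Python. The verification times come from the gnark-bench run above; everything else (the fraction of CPU time a fullnode budgets for OP_ZKP verification, the per-block OP_ZKP transaction count, and the aggregation factor) is a hypothetical assumption for illustration only, not a measured figure:

```python
# Back-of-envelope OP_ZKP capacity estimate.
# Measured (gnark-bench, Mac M2): ~1 ms Groth16, ~1.6 ms Plonk per verification.
# Everything below marked "assumed" is a hypothetical illustration.

GROTH16_VERIFY_S = 0.001   # ~1 ms per Groth16 verification (measured)
PLONK_VERIFY_S = 0.0016    # ~1.6 ms per Plonk verification (measured)
BLOCK_INTERVAL_S = 600     # average Bitcoin block interval

def max_verifications_per_block(verify_s, cpu_budget=0.1):
    """Proofs one core can verify per block interval, if it spends only
    `cpu_budget` (assumed 10%) of its time on OP_ZKP verification."""
    return round(BLOCK_INTERVAL_S * cpu_budget / verify_s)

groth16_cap = max_verifications_per_block(GROTH16_VERIFY_S)
plonk_cap = max_verifications_per_block(PLONK_VERIFY_S)
print(f"Groth16: ~{groth16_cap} proofs/block; Plonk: ~{plonk_cap} proofs/block")

# Effective TPS if each OP_ZKP transaction aggregates many off-chain
# transactions (both numbers below are assumed, not measured):
opzkp_tx_per_block = 300   # "a few hundred" OP_ZKP transactions per block
aggregated_per_tx = 5000   # assumed aggregation / recursion factor
effective_tps = opzkp_tx_per_block * aggregated_per_tx / BLOCK_INTERVAL_S
print(f"Effective TPS: ~{effective_tps:.0f}")
```

Even with only a tenth of one core's time, per-proof verification at these speeds leaves tens of thousands of verifications per block interval, which is why "thousands of OP_ZKP transactions" per block looks feasible before any aggregation at all.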
Consider a potential situation a few years out: thousands of active smart contracts based on OP_ZKP, with each block containing a few hundred OP_ZKP transactions, each of which aggregates or recursively verifies many transactions. The effective TPS of the Bitcoin network could then far exceed today's, reaching thousands or even more.

Hope this clarifies.

Weiji

On Tue, May 2, 2023 at 11:01 PM ZmnSCPxj wrote: > > Good morning Weiji, > > > Meanwhile, as we can potentially aggregate many proofs or recursively > verify even more, the average cost might still be manageable. > > Are miners supposed to do this aggregation? > > If miners do this aggregation, then that implies that all fullnodes must > also perform the **non**-aggregated validation as transactions flow from > transaction creators to miners, and that is the cost (viz. the > **non**-aggregated cost) that must be reflected in the weight. > We should note that fullnodes are really miners with 0 hashpower, and any > cost you impose on miners is a cost you impose on all fullnodes. > > If you want to aggregate, you might want to do that in a separate network > that does ***not*** involve Bitcoin fullnodes, and possibly allow for some > kind of extraction of fees to do aggregation, then have already-aggregated > transactions in the Bitcoin mempool, so that fullnodes only need validate > already-aggregated transactions. > > Remember, validation is run when a transaction enters the mempool, and is > **not** re-run when an in-mempool transaction is seen in a block > (`blocksonly` of course does not follow this as it has no mempool, but most > fullnodes are not `blocksonly`). > If you intend to aggregate transactions in the mempool, then at the worst > case a fullnode will be validating every non-aggregated transaction, and > that is what we want to limit by increasing the weight of heavy-validation > transactions. > > Regards, > ZmnSCPxj >