>> It would usually be prudent to store this recovery address with every
>> key to the vault, ...

> Worth noting now that in OP_VAULT the recovery path can be optionally
> gated by an arbitrary scriptPubKey.

Gating by a scriptPubKey solves the problem I was talking about there. However, thinking about it more, I realized that doing this basically turns OP_VAULT into something able to do general covenants. By making `unvault-target-hash` unsatisfiable (set to some random number that isn't derived from a hash), the delay wouldn't matter, but arbitrary conditions can be set on spending the utxo to the "recovery address", which could itself be another OP_UNVAULT destination. It seems like that could be used as a general CTV-like covenant.

>> Wouldn't it be reasonably possible to allow recovery outputs with any
>> recovery address to be batched, and the amount sums sent to each to be
>> added up and verified?

> I think the space savings from this is pretty negligible

Besides space savings, there's the consideration of the usability of the vault and of downstream code complexity. One of the criteria I designed the "efficient wallet vaults" opcodes around is that a vault output should be spendable in as many of the situations where a normal output is spendable as possible. Having a constraint that prevents one type of otherwise-spendable output from being combined with another type would add complexity to all downstream code, which would now have to handle special cases: either a simple error-and-recover path if they just want to disallow that type of output being combined with other types (which may degrade the user experience by asking the user to provide a different utxo or do an unvaulting first), or some kind of special-case handling to make it work.
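To make the covenant-chaining point concrete, here's a toy Python model. This is an illustration of the idea only, not the real OP_VAULT/OP_UNVAULT semantics, and all the class and variable names are hypothetical: a vault-like output with an unsatisfiable unvault hash leaves only the recovery path usable, and that path's committed destination can itself be another vault-like output.

```python
import hashlib
import os

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

class VaultOutput:
    """Toy model of an OP_VAULT-style output (not the real semantics).
    Two spend paths:
      - unvault: must reveal a target whose hash matches unvault_target_hash
      - recover: must pay to the committed recovery scriptPubKey
    """
    def __init__(self, unvault_target_hash: bytes, recovery_spk: bytes):
        self.unvault_target_hash = unvault_target_hash
        self.recovery_spk = recovery_spk

    def try_unvault(self, target: bytes) -> bool:
        return H(target) == self.unvault_target_hash

    def try_recover(self, dest_spk: bytes) -> bool:
        return dest_spk == self.recovery_spk

# Make the unvault path unsatisfiable: random bytes that are (with
# overwhelming probability) not the hash of any target anyone knows.
unsatisfiable = os.urandom(32)

# The "recovery address" commits to the next hop, which is itself a
# vault-like output -- giving a CTV-like covenant chain.
hop2 = VaultOutput(os.urandom(32), b"final-destination-spk")
hop1 = VaultOutput(unsatisfiable, recovery_spk=b"spk-committing-to-hop2")

assert not hop1.try_unvault(b"any-target-we-can-think-of")
assert hop1.try_recover(b"spk-committing-to-hop2")
assert hop2.try_recover(b"final-destination-spk")
```

The only usable spend path at each hop is the recovery path, so the coins are forced along a predetermined route even though the opcode was designed for vault recovery rather than general covenants.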
Any kind of hybrid wallet incorporating a vault (eg a wallet that combines a vault with a hot address or a lightning channel, kind of like Phoenix combines a lightning channel with a normal onchain wallet) would need to deal with this kind of extra complexity during utxo selection. Are there currently any situations where one otherwise-spendable utxo can't be mixed with another? If not, this added edge case deserves some extra consideration, I think.

> I can imagine there might be a mechanism where you include a payout
> output to some third party in a drafted unvault trigger transaction,
> and they provide a spend of the ephemeral output.

I agree that's doable. I just think it merits some consideration as to whether that complexity (both for downstream code and for users) is a favorable trade-off vs having a solution that reasonably bounds the fees spendable from the vault.

Consider the case where a self-custodying user has a small set of keys (2? 3?) and uses all of those keys to secure their vault, and just 1 of them to secure their hot wallet. That doesn't seem an implausible case, and I could imagine that kind of setup becoming quite common. In such a case, if the hot wallet key is stolen, one vault key is also stolen, and the hot wallet's funds could be stolen at the same time an unvaulting is triggered. The need to figure out how to coordinate a 3rd party's help to recover is at best an added difficulty and delay.

An alternative would be to keep a completely separate hot wallet key that isn't used as part of the vault. But because key storage is by far the most difficult and costly part of self-custody, every additional key that needs to be stored is a significant additional burden (that's one of the benefits of wallet vaults: fewer seeds needed for a given amount of security/redundancy).
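The utxo-selection complexity I'm describing can be sketched as hypothetical wallet code. Assuming a made-up "restricted" output kind that (like an output under the batching restriction discussed above) can't be combined with other inputs, coin selection grows an extra branch that every downstream wallet would have to carry:

```python
from dataclasses import dataclass

@dataclass
class Utxo:
    txid: str
    amount_sats: int
    # Hypothetical tag: "plain" outputs mix freely; "restricted" outputs
    # can only be spent on their own.
    kind: str

def select_coins(utxos, target_sats):
    """Naive largest-first selection with the special case a non-mixable
    output type would force on every wallet."""
    # First try plain outputs, which mix freely.
    plain = sorted((u for u in utxos if u.kind == "plain"),
                   key=lambda u: -u.amount_sats)
    selected, total = [], 0
    for u in plain:
        selected.append(u)
        total += u.amount_sats
        if total >= target_sats:
            return selected
    # Special case: a restricted output can only be spent alone, so it
    # helps only if a single one covers the whole target.
    for u in utxos:
        if u.kind == "restricted" and u.amount_sats >= target_sats:
            return [u]
    raise ValueError("insufficient funds under mixing restriction")
```

If the wallet can't satisfy the payment from either bucket, the user gets the degraded experience mentioned above: an error asking them to use different coins or do an unvaulting first, even though the total balance is sufficient.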
Another alternative would be to have a hot wallet that, for its primary spend-path, uses a memory-only passphrase on one of the vault seeds (so compromise of a vault seed won't compromise the hot wallet) and that has a recovery spend-path using multiple (or all) vault seeds, for recovery if you forget the passphrase. It certainly seems like something can be worked out here to make the end-user experience reasonable, but the additional operational complexity this would entail still deserves consideration.

>> OP_BEFOREBLOCKVERIFY

> I think this breaks fundamental reorgability of transactions.

I discuss this in the Reorg Safety section here.

>> This is done by using a static intermediate address that has no values
>> that are unique to the particular wallet vault address.

> Does mean .. that (ii) .. in order to be batch unvaulted [with dynamic
> unvaulting targets], vaulted coins need to first be spent into this
> intermediate output?

It does support dynamic unvaulting targets, using OP_PUSHOUTPUTSTACK, which adds data to an output that carries over into the script execution that spends that output. The design is done such that once the intermediate output has been confirmed and the unvaulting delay has passed, it is fully owned by the recipient without a second transaction (because of the use of OP_BEFOREBLOCKVERIFY). If OP_BEFOREBLOCKVERIFY is deemed to be unacceptable, then the intermediate output is fully intermediate, and a 2nd transaction would be required to get the funds to their committed recipient.

> it'd be valuable to see a full implementation

While OP_BEFOREBLOCKVERIFY can be dropped with only somewhat minor degraded usability, OP_PUSHOUTPUTSTACK is necessary for the proposal to work as intended. I would want to see some support for the high-level concepts it introduces before spending significant time on an implementation.
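To illustrate the concept OP_PUSHOUTPUTSTACK introduces, here is a toy Python model. This is a sketch of the intended high-level behavior under my own simplifying assumptions, not proposed consensus code: data attached to an output at creation time is placed on the stack when that output is later spent, which is what lets the intermediate output commit to a destination chosen at unvault time.

```python
class Output:
    """Toy output: a script plus data carried along with the output
    (the hypothetical OP_PUSHOUTPUTSTACK payload)."""
    def __init__(self, script, carried_stack=None):
        self.script = script                  # callable(stack) -> bool
        self.carried_stack = carried_stack or []

def spend(output, witness):
    # The carried ("hidden") data is placed on the stack ahead of the
    # witness items, so the output's script sees both.
    stack = list(output.carried_stack) + list(witness)
    return output.script(stack)

# Example: the intermediate output's script checks that the destination
# committed at unvault time (carried data) matches the one being paid.
def intermediate_script(stack):
    committed_dest, claimed_dest = stack[0], stack[1]
    return committed_dest == claimed_dest

out = Output(intermediate_script, carried_stack=[b"dest-addr"])
assert spend(out, [b"dest-addr"])
assert not spend(out, [b"attacker-addr"])
```

The key property is that the intermediate address itself stays static: nothing vault-specific appears in its script, because the per-spend commitment rides along as carried data instead.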
It does something fundamentally new that other opcodes haven't done: add "hidden" data onto the output that allows for committing to destination addresses. Maybe something along the lines of Greg Sanders' suggestion for your proposal could replace the need for this, but I'm not sure it's possible with how OP_CD is designed.

On Wed, Jan 18, 2023 at 5:38 PM James O'Beirne wrote:

> > I don't see in the write up how a node verifies that the destination
> > of a spend using an OP_VAULT output uses an appropriate OP_UNVAULT
> > script.
>
> It's probably quicker for you to just read through the implementation
> that I reference in the last section of the paper.
>
> https://github.com/bitcoin/bitcoin/blob/fdfd5e93f96856fbb41243441177a40ebbac6085/src/script/interpreter.cpp#L1419-L1456
>
> > It would usually be prudent to store this recovery address with every
> > key to the vault, ...
>
> I'm not sure I really follow here. Worth noting now that in OP_VAULT
> the recovery path can be optionally gated by an arbitrary scriptPubKey.
>
> > This is rather limiting isn't it? Losing the key required to sign
> > loses your recovery option.
>
> This functionality is optional in OP_VAULT as of today. You can specify
> OP_TRUE (or maybe I should allow empty?) in the to disable any signing
> necessary for recovery.
>
> > Wouldn't it be reasonably possible to allow recovery outputs with any
> > recovery address to be batched, and the amount sums sent to each to
> > be added up and verified?
>
> I think the space savings from this is pretty negligible, since you're
> just saving on the transaction overhead, and it makes the
> implementation decently more complicated. One benefit might be sharing
> a common fee-management output (e.g. ephemeral anchor) across the
> separate vaults being recovered.
>
> > If someday wallet vaults are the standard wallet construct, people
> > might not even want to have a non-vault wallet just for use in
> > unvaulting.
> If you truly lacked any non-vaulted UTXOs and couldn't get any at a
> reasonable price (?), I can imagine there might be a mechanism where
> you include a payout output to some third party in a drafted unvault
> trigger transaction, and they provide a spend of the ephemeral output.
>
> Though you do raise a good point that this construction as written may
> not be compatible with SIGHASH_GROUP... I'd have to think about that
> one.
>
> > Hmm, it seems inaccurate to say that step is "skipped". While there
> > isn't a warm wallet step, it's replaced with an OP_UNVAULT script
> > step.
>
> It is "skipped" in the sense that your bitcoin can't be stolen by
> having to pass through some intermediate wallet during an authorized
> withdrawal to a given target, in the way that they could if you had to
> prespecify an unvault target when creating the vault.
>
> ---
>
> > My proposal for efficient wallet vaults was designed to meet all of
> > those criteria, and allows batching as well.
>
> Probably a discussion of your proposal merits a different thread, but
> some thoughts that occur:
>
> > [from the README]
> >
> > OP_BEFOREBLOCKVERIFY - Verifies that the block the transaction is
> > within has a block height below a particular number. This allows a
> > spend-path to expire.
>
> I think this breaks fundamental reorgability of transactions. I think
> some of the other opcodes, e.g. the one that limits fee contribution
> on the basis of historical feerate, are going to be similarly
> controversial.
>
> > This is done by using a static intermediate address that has no
> > values that are unique to the particular wallet vault address.
>
> Does this mean either that (i) this proposal doesn't have dynamic
> unvaulting targets or, (ii) if you do, in order to be batch unvaulted,
> vaulted coins need to first be spent into this intermediate output?
>
> It sounds like (ii) is the case, given that your unvault target
> specification lives in (I think?) the witness for the spend creating
> the intermediate output.
>
> If the intermediate address doesn't have any values which are unique
> to a particular vault, how do you authorize recoveries from it?
>
> ---
>
> Personally I think if you'd like to pursue your proposal, it'd be
> valuable to see a full implementation. Might also make it easier to
> assess the viability of the proposal.